
Cloud seeding

Cloud seeding, a form of weather modification, is the attempt to change the amount or
type of precipitation that falls from clouds, by dispersing substances into the air that serve
as cloud condensation or ice nuclei, which alter the microphysical processes within the
cloud. The usual intent is to increase precipitation (rain or snow), but hail suppression and fog
suppression, the latter particularly at airports, are also widely practiced.

How cloud seeding works

A ground-based silver iodide generator


The most common chemicals used for cloud seeding include silver iodide and dry ice
(frozen carbon dioxide). The expansion of liquid propane into a gas has also been used
and can produce ice crystals at warmer temperatures than silver iodide. The use of
hygroscopic materials, such as salt, is increasing in popularity because of some promising
research results.

Seeding of clouds requires that they contain supercooled liquid water—that is, liquid
water colder than zero degrees Celsius. Introduction of a substance such as silver iodide,
which has a crystalline structure similar to that of ice, will induce freezing nucleation.
Dry ice or propane expansion cools the air to such an extent that ice crystals can nucleate
spontaneously from the vapor phase. Unlike seeding with silver iodide, this spontaneous
nucleation does not require any existing droplets or particles because it produces
extremely high vapor supersaturations near the seeding substance. However, the existing
droplets are needed for the ice crystals to grow into large enough particles to precipitate
out.

In mid-latitude clouds, the usual seeding strategy has been predicated upon the fact that
the equilibrium vapor pressure is lower over ice than over water. When ice particles form
in supercooled clouds, this fact allows the ice particles to grow at the expense of liquid
droplets. If there is sufficient growth, the particles become heavy enough to fall as snow
(or, if melting occurs, rain) from clouds that otherwise would produce no precipitation.
This process is known as "static" seeding.
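
A minimal, illustrative way to see the vapour-pressure difference behind static seeding is to compare Magnus-type approximation formulas for saturation vapour pressure over water and over ice (the coefficients below are commonly used meteorological values and the example temperatures are arbitrary; neither is taken from this article):

import math

def e_sat_water(t_c):
    """Approximate saturation vapour pressure (hPa) over (supercooled) liquid water at t_c degrees Celsius."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def e_sat_ice(t_c):
    """Approximate saturation vapour pressure (hPa) over ice at t_c degrees Celsius."""
    return 6.112 * math.exp(22.46 * t_c / (272.62 + t_c))

for t in (-5, -10, -15, -20):
    print(t, round(e_sat_water(t), 2), round(e_sat_ice(t), 2))
# At every temperature below 0 C the value over ice is the lower one, so freshly nucleated
# ice crystals grow by vapour deposition while neighbouring supercooled droplets evaporate.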

Seeding of warm-season or tropical cumuliform (convective) clouds seeks to exploit the
latent heat released by freezing. This strategy of "dynamic" seeding assumes that the
additional latent heat adds buoyancy, strengthens updrafts, ensures more low-level
convergence, and ultimately causes rapid growth of properly selected clouds.

Cloud seeding chemicals may be dispersed by aircraft (as in the second figure) or by
dispersion devices located on the ground (generators, as in first figure, or canisters fired
from anti-aircraft guns or rockets). For release by aircraft, silver iodide flares are ignited
and dispersed as an aircraft flies through the inflow of a cloud. When released by devices
on the ground, the fine particles are carried downwind and upwards by air currents after
release.

Effectiveness

Referring to the 1903, 1915, 1919, 1944 and 1947 weather modification experiments, the
Federation of Meteorology discounted "rain making". By the 1950s the CSIRO
Division of Radiophysics had switched to investigating the physics of clouds and hoped
by 1957 to be masters of the weather. By the 1960s the dreams of weather making had
truly faded, only to be re-ignited after the corporatisation of the Snowy Mountains Scheme, in
order to achieve "above target" water for energy generation and profits.

While cloud seeding has been shown to be effective in altering cloud structure and size, and in
converting cloud water to ice particles, it is more controversial whether cloud seeding
increases the amount of precipitation at the ground. Cloud seeding may also suppress
precipitation.[citation needed]

Part of the problem is that it is difficult to discern how much precipitation would have
occurred had the cloud not been seeded. There are no discernible "traces" of the
effectiveness of recent cloud seeding in the Snowy Mountains of Australia. Nevertheless,
there is hope that winter cloud seeding over mountains will produce snow. This hope rests on
a partial interpretation of statements by professional societies such as the Weather Modification
Association, the World Meteorological Organization, and the American Meteorological Society
(AMS). The AMS states that there is statistical evidence for seasonal precipitation
increases of about 10% with winter seeding [1]; however, this clearly does not apply to
all cloud seeding activities.
cloud seeding does not produce positive results in all cases and is dependent on
specificity of clouds, wind speed and direction, terrain and other factors.

The National Center for Atmospheric Research (NCAR), an institution in Boulder,
Colorado, has carried out statistical analyses of seeded and unseeded clouds in an attempt
to understand the differences between them. They have conducted seeding research in
several countries that include Mali, Saudi Arabia, Mexico, South Africa, Thailand, Italy,
and Argentina.

It has also been said that clouds were seeded before the 2008 Summer Olympics in Beijing so
that there would be no rain during the opening ceremony.[1] The Chinese weather
modification office rarely publishes in the open scientific literature and therefore their
claims of success are widely disputed.

Impact on environment and health

With an NFPA 704 rating of Blue 2, silver iodide can cause temporary incapacitation or
possible residual injury (a rating shared by, e.g., chloroform) in humans and mammals with intense or
continued but not chronic exposure. However, several detailed ecological
studies have shown negligible environmental and health impacts.[2][3][4] The toxicity of
silver and silver compounds (from silver iodide) was shown to be of low order in some
studies. These findings likely result from the minute amounts of silver generated by cloud
seeding, which are 100 times less than industry emissions into the atmosphere in many
parts of the world, or individual exposure from tooth fillings[5].

Accumulations in the soil, vegetation, and surface runoff have not been large enough to
measure above natural background[6]. A 1995 environmental assessment in the Sierra
Nevada of California[7] and a 2004 independent panel of experts in Australia confirmed
these earlier findings. The paper does not include the names of the experts, their scientific
qualifications, or published research papers to support the assertion that cloud seeding will
have no ecotoxic impacts and will not affect alpine waterways.

Cloud seeding over Kosciuszko National Park - a Biosphere Reserve - is problematic in
that several rapid changes of environmental legislation were made to enable the "trial".
Environmentalists are concerned about the uptake of silver in a highly sensitive
environment affecting the pygmy possum amongst other species as well as recent high
level algal blooms in once pristine glacial lakes. The ABC program Earthbeat on 17 July
2004 heard that "not every cloud has a silver lining", with concerns raised about the health of the
pygmy possum. Earlier research and analysis by the former Snowy
Mountains Authority led to the cessation of the cloud seeding program in the 1950s with
non-definitive results. Formerly, cloud seeding was rejected in Australia on
environmental grounds because of concerns about the protected species, the pygmy
possum.

History

Cessna 210 with cloud seeding equipment

Vincent Schaefer (1906–1993) discovered the principle of
cloud seeding in July 1946 through a series of serendipitous
events. Following ideas generated between himself and
Nobel laureate Irving Langmuir while climbing Mt.
Washington in New Hampshire, Schaefer, Langmuir's research associate, created a way
of experimenting with supercooled clouds using a deep freeze unit lined with black
velveteen. He tried hundreds of potential agents to stimulate ice crystal growth, i.e., salt,
talcum powder, soils, dust and various chemical agents with minor effect. Then one hot
and humid July day he wanted to try a few experiments at General Electric's Schenectady
Research Lab. He was dismayed to find that the deep freezer was not cold enough to
produce a cloud using his exhaled breath. He decided to move the process along by adding a
chunk of dry ice just to lower the temperature. To his astonishment, as soon as he
breathed into the chamber, a bluish haze was noted, followed by an eye-popping display
of millions of tiny ice crystals, reflecting the strong light rays illuminating a cross-section
of the chamber. He instantly realized that he had discovered a way to change supercooled
water into ice crystals. The experiment was easily replicated and he explored the
temperature gradient to establish the −40˚C[8] limit for liquid water. Within the month,
Schaefer's colleague, the noted atmospheric scientist Dr. Bernard Vonnegut (brother of
novelist Kurt Vonnegut), discovered another method for "seeding"
supercooled cloud water. Vonnegut accomplished his discovery at his desk, looking up
information in a basic chemistry text and then tinkering with silver and iodide chemicals
to produce silver iodide. Together with Dr. Vonnegut, Professor Henry Chessin, SUNY
Albany, a crystallographer, co-authored a publication in Science ("Ice Nucleation by
Coprecipitated Silver Iodide and Silver Bromide", B. Vonnegut and H. Chessin, Science,
26 November 1971, Vol. 174, No. 4012, pp. 945-946, DOI: 10.1126/science.174.4012.945),
and a patent was granted in 1975 ("Freezing Nucleant", Bernard
Vonnegut, Henry Chessin, and Richard E. Passarelli, Jr., #3,877,642, April 15, 1975). Both
methods were adopted for use in cloud seeding during 1946, while Schaefer and Vonnegut were
working for the General Electric Corporation in the state of New York. Schaefer's method altered
a cloud's heat budget; Vonnegut's altered the formative crystal structure – an ingenious property related to a
good match in lattice constant between the two types of crystal. (The crystallography of
ice later played a role in Kurt Vonnegut's novel Cat's Cradle.) The first attempt to modify
natural clouds in the field through "cloud seeding" was made during a flight that began in
upstate New York on 13 November 1946. Schaefer was able to cause snow to fall near
Mount Greylock in western Massachusetts, after he dumped six pounds of dry ice into the
target cloud from a plane after a 60 mile easterly chase from the Schenectady County
Airport.[9]

Dry ice and silver iodide agents are effective in changing the physical chemistry of
supercooled clouds, and thus useful in the augmentation of winter snowfall over mountains and,
under certain conditions, in lightning and hail suppression. While not a new technique,
hygroscopic seeding for enhancement of rainfall in warm clouds is enjoying a revival,
based on some positive indications from research in South Africa, Mexico, and
elsewhere. The hygroscopic material most commonly used is salt. It is postulated that
hygroscopic seeding causes the droplet size spectrum in clouds to become more maritime
(bigger drops) and less continental, stimulating rainfall through coalescence.

From March 1967 until July 1972, the U.S. military's Operation Popeye seeded clouds with silver iodide to
extend the monsoon season over North Vietnam, specifically the Ho Chi Minh Trail. The
operation resulted in the targeted areas seeing an extension of the monsoon period an
average of 30 to 45 days.[2] The 54th Weather Reconnaissance Squadron carried out the
operation to "make mud, not war". [3]

In 1969 at the Woodstock Festival, various people claimed to have witnessed clouds
being seeded by the U.S. military. This was said to be the cause of the rain which lasted
throughout most of the festival.

One private organization which offered, during the 1970s, to conduct weather
modification (cloud seeding from the ground using silver iodide flares) was Irving P.
Krick and Associates of Palm Springs, California. They were contracted by the Oklahoma
State University in 1972 to conduct such a seeding project to increase warm cloud rainfall
in the Lake Carl Blackwell watershed. That lake was, at that time (1972-73), the primary
water supply for Stillwater, Oklahoma and was dangerously low. The project did not
operate for a long enough time to show statistically any change from natural variations.
However, at the same time, seeding operations have been ongoing in California since
1948.

An attempt by the United States military to modify hurricanes in the Atlantic basin using
cloud seeding in the 1960s was called Project Stormfury. Only a few hurricanes were
tested with cloud seeding because of the strict rules that were set by the scientists of the
project. It was unclear whether the project was successful; hurricanes appeared to change
in structure slightly, but only temporarily. The fear that cloud seeding could potentially
change the course or power of hurricanes and negatively affect people in the storm's path
stopped the project.

Two Federal agencies have supported various weather modification research projects,
which began in the early 1960s: The United States Bureau of Reclamation (Reclamation;
Department of the Interior) and the National Oceanic and Atmospheric Administration
(NOAA; Department of Commerce). Reclamation sponsored several cloud seeding
research projects under the umbrella of Project Skywater from 1964 to 1988, and NOAA
conducted the Atmospheric Modification Program from 1979 to 1993. The sponsored
projects were carried out in several states and two countries (Thailand and Morocco),
studying both winter and summer cloud seeding. More recently, Reclamation sponsored a
small cooperative research program with six Western states called the Weather Damage
Modification Program [4], from 2002–2006.

Funding for research in the United States has declined in the last two decades. A 2003 study by the
United States National Academy of Sciences urged a national research program to clear up
remaining questions about weather modification's efficacy and practice.

In Australia, CSIRO conducted major trials between 1947 and the early 1960s:

• 1947 – 1952: CSIRO scientists dropped dry ice into the tops of cumulus clouds.
The method worked reliably with clouds that were very cold, producing rain that
would not have otherwise fallen.

• 1953 – 1956: CSIRO carried out similar trials in South Australia, Queensland and
other States. Experiments used both ground-based and airborne silver iodide
generators.

• Late 1950s and early 1960s: Cloud seeding in the Snowy Mountains, on the Cape
York Peninsula in Queensland, in the New England district of New South Wales,
and in the Warragamba catchment area west of Sydney.

Only the trial conducted in the Snowy Mountains produced statistically significant
rainfall increases over the entire experiment.

An Austrian study[10] of silver iodide seeding for hail prevention ran during 1981–
2000, and the technique is still actively deployed there.[11]

Modern uses

The largest cloud seeding system in the world is that of the People's Republic of China,
which believes that it increases the amount of rain over several increasingly arid regions,
including its capital city, Beijing, by firing silver iodide rockets into the sky where rain is
desired. There is even political strife caused by neighboring regions which accuse each
other of "stealing rain" using cloud seeding. About 24 countries currently practice
weather modification operationally. China used cloud seeding in Beijing just before the
2008 Olympic Games in order to clear the air of pollution, but there are disputes
regarding the Chinese claims. In February 2009, China also blasted iodide sticks over
Beijing to artificially induce snowfall after four months of drought, and blasted iodide
sticks over other areas of northern China to increase snowfall. The snowfall in Beijing,
which rarely experiences snow, lasted for approximately three days and led to the closure
of 12 main roads around Beijing.[12]

In the United States, cloud seeding is used to increase precipitation in areas experiencing
drought, to reduce the size of hailstones that form in thunderstorms, and to reduce the
amount of fog in and around airports. Cloud seeding is also occasionally used by major
ski resorts to induce snowfall. Eleven western states and one Canadian province (Alberta)
have ongoing weather modification operational programs [5]. In January 2006, an $8.8
million cloud seeding project began in Wyoming to examine the effects of cloud seeding
on snowfall over Wyoming's Medicine Bow, Sierra Madre, and Wind River mountain
ranges. [6]

A number of commercial companies, such as Aero Systems Incorporated [7],
Atmospherics Incorporated [8], North American Weather Consultants [9], Weather
Modification Incorporated [10], Weather Enhancement Technologies International [11],
Seeding Operations and Atmospheric Research (SOAR) [12], offer weather modification
services centered on cloud seeding. The USAF proposed its use on the battlefield in 1996,
although the U.S. signed an international treaty in 1978 banning the use of weather
modification for hostile purposes.

This Cessna 441 is used to conduct cloud-seeding flights on behalf of Hydro Tasmania

In Australia, CSIRO’s activities in Tasmania in the 1960s
were successful[citation needed]. Seeding over the Hydro-
Electricity Commission catchment area on the Central Plateau achieved rainfall increases
as high as 30% in autumn. The Tasmanian experiments were so successful that the
Commission has regularly undertaken seeding ever since in mountainous parts of the
State.

Russian military pilots seeded clouds over Belarus after the Chernobyl disaster to remove
radioactive particles from clouds heading toward Moscow.[13]

Beginning in Winter 2004, Snowy Hydro Limited is conducting a six-year research
project of winter cloud seeding to assess the feasibility of increasing snow precipitation
in the Snowy Mountains in Australia. The NSW Natural Resources Commission,
responsible for supervising the cloud seeding operations, believes that the trial may have
difficulty establishing statistically whether cloud seeding operations are increasing
snowfall. This project was discussed at a summit in Narrabri, NSW on 1 December 2006.
The summit met with the intention of outlining a proposal for a 5 year trial, focussing on
Northern NSW.

The various implications of such a widespread trial were discussed, drawing on the
combined knowledge of several worldwide experts, including representatives from the
Tasmanian Hydro Cloud Seeding Project; however, no reference was made to the former
cloud seeding experiments by the then Snowy Mountains Authority, which had rejected
weather modification. The trial required changes to NSW environmental legislation in
order to facilitate placement of the cloud seeding apparatus. The modern experiment is
not supported for the Australian Alps.

At the July 2006 G8 Summit, President Putin commented that air force jets had been
deployed to seed incoming clouds so they rained over Finland. Rain drenched the summit
anyway.[14]

In Southeast Asia, open burning produces haze that pollutes the regional environment.
Cloud-seeding has been used to improve the air quality by encouraging rainfall.

In December 2006, the Queensland government of Australia announced AUD$7.6 million
in funding for "warm cloud" seeding research to be conducted jointly by the Australian
Bureau of Meteorology and the United States National Center for Atmospheric
Research.[15] It is hoped that outcomes of the study will ease continuing drought conditions in
the state's south-east region.

In Moscow, the Russian Air Force tried seeding clouds with bags of cement on June 17,
2008. One of the bags did not pulverize and went through the roof of a house.[16]

In India, cloud seeding operations were conducted during 2003 and 2004 through the
U.S.-based Weather Modification Inc. in the state of Maharashtra [17]. In 2008, there
were plans for operations in 12 districts of the state of Andhra Pradesh [18].

Ozone layer
The ozone layer is a layer in Earth's atmosphere which contains relatively high
concentrations of ozone (O3). This layer absorbs 93-99% of the sun's high frequency
ultraviolet light, which is potentially damaging to life on earth.[1] Over 91% of the ozone
in Earth's atmosphere is present here.[1] It is mainly located in the lower portion of the
stratosphere from approximately 10 km to 50 km above Earth, though the thickness
varies seasonally and geographically.[2] The ozone layer was discovered in 1913 by the
French physicists Charles Fabry and Henri Buisson. Its properties were explored in detail
by the British meteorologist G. M. B. Dobson, who developed a simple
spectrophotometer (the Dobsonmeter) that could be used to measure stratospheric ozone
from the ground. Between 1928 and 1958 Dobson established a worldwide network of
ozone monitoring stations which continues to operate today. The "Dobson unit", a
convenient measure of the total amount of ozone in a column overhead, is named in his
honor.
Origin of ozone

Ozone-oxygen cycle in the ozone layer.

The photochemical mechanisms that give rise to the ozone layer were discovered by the
British physicist Sidney Chapman in 1930. Ozone in the earth's stratosphere is created by
ultraviolet light striking oxygen molecules containing two oxygen atoms (O2), splitting them into individual oxygen
atoms (atomic oxygen); the atomic oxygen then combines with unbroken O2 to create
ozone, O3. The ozone molecule is also unstable (although, in the stratosphere, long-lived)
and when ultraviolet light hits ozone it splits into a molecule of O2 and an atom of atomic
oxygen, a continuing process called the ozone-oxygen cycle, thus creating an ozone layer
in the stratosphere, the region from about 10 to 50 km (32,000 to 164,000 feet) above
Earth's surface. About 90% of the ozone in our atmosphere is contained in the
stratosphere. Ozone concentrations are greatest between about 20 and 40 km, where they
range from about 2 to 8 parts per million. If all of the ozone were compressed to the
pressure of the air at sea level, it would be only a few millimeters thick.
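
The ozone-oxygen cycle sketched above, and the "few millimeters" figure, can be made concrete. A minimal illustration (the reaction summary and the Dobson-unit definition used here are standard textbook values, and the 300 DU column is an illustrative choice; none of this is quoted from the article):

# Chapman's ozone-oxygen cycle, in outline:
#   O2 + UV photon -> O + O        (photolysis of molecular oxygen)
#   O  + O2 + M    -> O3 + M       (ozone formation; M is any third molecule)
#   O3 + UV photon -> O2 + O       (ozone photolysis, the step that absorbs UV)
#   O  + O3        -> 2 O2         (slow loss reaction)
# The total column is usually quoted in Dobson units (DU):
# 1 DU corresponds to a 0.01 mm layer of pure ozone at sea-level pressure.
typical_column_du = 300            # a representative mid-latitude column (illustrative)
thickness_mm = typical_column_du * 0.01
print(thickness_mm)                # -> 3.0 mm, i.e. "only a few millimeters thick"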

Ultraviolet light and ozone

Levels of ozone at various altitudes and blocking of ultraviolet radiation.

Although the concentration of the ozone in the ozone layer is very small, it is vitally
important to life because it absorbs biologically harmful ultraviolet (UV) radiation
emitted from the Sun. UV radiation is divided into three categories, based on its
wavelength; these are referred to as UV-A (400-315 nm), UV-B (315-280 nm), and UV-C
(280-100 nm). UV-C, which would be very harmful to humans, is entirely screened out
by ozone at around 35 km altitude. UV-B radiation can be harmful to the skin and is the
main cause of sunburn; excessive exposure can also cause genetic damage, resulting in
problems such as skin cancer. The ozone layer is very effective at screening out UV-B;
for radiation with a wavelength of 290 nm, the intensity at Earth's surface is 350 billion
times weaker than at the top of the atmosphere. Nevertheless, some UV-B reaches the
surface. Most UV-A reaches the surface; this radiation is significantly less harmful,
although it can potentially cause genetic damage.
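
As a rough way to read that 350-billion-fold figure, it can be converted into an optical depth with the Beer-Lambert law, I = I0 * exp(-tau). This is an illustrative calculation, not taken from the article:

import math

attenuation_factor = 350e9                    # surface intensity ~350 billion times weaker at 290 nm (quoted above)
optical_depth = math.log(attenuation_factor)  # Beer-Lambert: I = I0 * exp(-tau)  =>  tau = ln(I0/I)
print(round(optical_depth, 1))                # ~26.6, an extremely opaque atmosphere at this wavelength
# For comparison, an optical depth of 1 already removes about 63% of the incident light:
print(1 - math.exp(-1))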

Distribution of ozone in the stratosphere

The thickness of the ozone layer—that is, the total amount of ozone in a column overhead
—varies by a large factor worldwide, being in general smaller near the equator and larger
as one moves towards the poles. It also varies with season, being in general thicker
during the spring and thinner during the autumn in the northern hemisphere. The reasons
for this latitude and seasonal dependence are complicated, involving atmospheric
circulation patterns as well as solar intensity.

Since stratospheric ozone is produced by solar UV radiation, one might expect to find the
highest ozone levels over the tropics and the lowest over polar regions. The same
argument would lead one to expect the highest ozone levels in the summer and the lowest
in the winter. The observed behavior is very different: most of the ozone is found in the
mid-to-high latitudes of the northern and southern hemispheres, and the highest levels are
found in the spring, not summer, and the lowest in the autumn, not winter, in the northern
hemisphere. During winter, the ozone layer actually increases in depth. This puzzle is
explained by the prevailing stratospheric wind patterns, known as the Brewer-Dobson
circulation. While most of the ozone is indeed created over the tropics, the stratospheric
circulation then transports it poleward and downward to the lower stratosphere of the
high latitudes. However in the southern hemisphere, owing to the ozone hole
phenomenon, the lowest amounts of column ozone found anywhere in the world are over
the Antarctic in the southern spring period of September and October.

Brewer-Dobson circulation in the ozone layer.

The ozone layer is higher in altitude in the tropics, and lower in altitude in the
extratropics, especially in the polar regions. This altitude variation of ozone results from
the slow circulation that lifts the ozone-poor air out of the troposphere into the
stratosphere. As this air slowly rises in the tropics, ozone is produced by the overhead sun
which photolyzes oxygen molecules. As this slow circulation bends towards the mid-
latitudes, it carries the ozone-rich air from the tropical middle stratosphere to the mid-
and-high latitudes lower stratosphere. The high ozone concentrations at high latitudes are
due to the accumulation of ozone at lower altitudes.
The Brewer-Dobson circulation moves very slowly. The time needed to lift an air parcel
from the tropical tropopause near 16 km (50,000 ft) to 20 km is about 4-5 months (about
30 feet (9.1 m) per day). Even though ozone in the lower tropical stratosphere is
produced at a very slow rate, the lifting circulation is so slow that ozone can build up to
relatively high levels by the time it reaches 26 km.

Ozone amounts over the continental United States (25°N to 49°N) are highest in the
northern spring (April and May). These ozone amounts fall over the course of the
summer to their lowest amounts in October, and then rise again over the course of the
winter. Again, wind transport of ozone is principally responsible for the seasonal
evolution of these higher latitude ozone patterns.

The total column amount of ozone generally increases as we move from the tropics to
higher latitudes in both hemispheres. However, the overall column amounts are greater in
the northern hemisphere high latitudes than in the southern hemisphere high latitudes. In
addition, while the highest amounts of column ozone over the Arctic occur in the
northern spring (March-April), the opposite is true over the Antarctic, where the lowest
amounts of column ozone occur in the southern spring (September-October). Indeed, the
highest amounts of column ozone anywhere in the world are found over the Arctic region
during the northern spring period of March and April. The amounts then decrease over
the course of the northern summer. Meanwhile, the lowest amounts of column ozone
anywhere in the world are found over the Antarctic in the southern spring period of
September and October.

Satellite
In the context of spaceflight, a satellite is an object which has been placed into orbit by
human endeavor. Such objects are sometimes called artificial satellites to distinguish
them from natural satellites such as the Moon.

A full size model of the Earth observation satellite ERS 2

History
Early conceptions

The first fictional depiction of a satellite being launched into orbit is a short story by
Edward Everett Hale, The Brick Moon. The story was serialized in The Atlantic Monthly,
starting in 1869.[1][2] The idea surfaced again in Jules Verne's The Begum's Millions
(1879).

In 1903 Konstantin Tsiolkovsky (1857–1935) published The Exploration of Cosmic
Space by Means of Reaction Devices (in Russian: Исследование мировых пространств
реактивными приборами), which is the first academic treatise on the use of rocketry to
launch spacecraft. He calculated the orbital speed required for a minimal orbit around the
Earth at 8 km/s, and that a multi-stage rocket fueled by liquid propellants could be used
to achieve this. He proposed the use of liquid hydrogen and liquid oxygen, though other
combinations can be used.
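
Tsiolkovsky's 8 km/s figure follows from the circular-orbit relation v = sqrt(GM/r). A minimal check, using standard values for Earth's gravitational parameter and radius (these constants and the 200 km altitude are illustrative assumptions, not from the text):

import math

MU_EARTH = 3.986e14        # Earth's gravitational parameter GM, m^3 s^-2
R_EARTH = 6.371e6          # mean Earth radius, m

def circular_orbit_speed(altitude_m):
    """Speed required for a circular orbit at the given altitude above the surface."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

print(circular_orbit_speed(200e3) / 1000)   # ~7.8 km/s for a minimal low orbit, close to the quoted 8 km/s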

In 1928 Slovenian Herman Potočnik (1892–1929) published his sole book, The Problem
of Space Travel — The Rocket Motor (German: Das Problem der Befahrung des
Weltraums — der Raketen-Motor), a plan for a breakthrough into space and a permanent
human presence there. He conceived of a space station in detail and calculated its
geostationary orbit. He described the use of orbiting spacecraft for detailed peaceful and
military observation of the ground and described how the special conditions of space
could be useful for scientific experiments. The book described geostationary satellites
(first put forward by Tsiolkovsky) and discussed communication between them and the
ground using radio, but fell short of the idea of using satellites for mass broadcasting and
as telecommunications relays.

In a 1945 Wireless World article the English science fiction writer Arthur C. Clarke
(1917-2008) described in detail the possible use of communications satellites for mass
communications.[3] Clarke examined the logistics of satellite launch, possible orbits and
other aspects of the creation of a network of world-circling satellites, pointing to the
benefits of high-speed global communications. He also suggested that three geostationary
satellites would provide coverage over the entire planet.
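
Clarke's three-satellite claim can be checked with simple geometry: from geostationary distance a satellite can see every point whose Earth-central angle from the sub-satellite point is less than arccos(R/(R+h)). A rough sketch (the radii are standard values, not from the text):

import math

R_EARTH = 6371.0       # km
R_GEO = 42164.0        # km, distance of a geostationary satellite from Earth's centre

visible_half_angle = math.degrees(math.acos(R_EARTH / R_GEO))
print(round(visible_half_angle, 1))    # ~81.3 degrees
# Each satellite sees a cap reaching ~81 degrees from its sub-satellite point, so three
# satellites spaced 120 degrees apart in longitude cover every longitude, leaving out
# only the regions very close to the poles (and, in practice, low-elevation fringes).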

History of artificial satellites


Further information: Timeline of artificial satellites and space probes
See also: Space Race

The first artificial satellite was Sputnik 1, launched by the Soviet Union on 4 October
1957, and initiating the Soviet Sputnik program, with Sergei Korolev as chief designer
and Kerim Kerimov as his assistant.[4] This in turn triggered the Space Race between the
Soviet Union and the United States.

Sputnik 1 helped to identify the density of high atmospheric layers through measurement
of its orbital change and provided data on radio-signal distribution in the ionosphere.
Because the satellite's body was filled with pressurized nitrogen, Sputnik 1 also provided
the first opportunity for meteoroid detection, as a loss of internal pressure due to
meteoroid penetration of the outer surface would have been evident in the temperature
data sent back to Earth. The unanticipated announcement of Sputnik 1's success
precipitated the Sputnik crisis in the United States and ignited the so-called Space Race
within the Cold War.

Sputnik 2 was launched on November 3, 1957 and carried the first living passenger into
orbit, a dog named Laika.[5]

In May 1946, Project RAND had released the Preliminary Design of an Experimental
World-Circling Spaceship, which stated, "A satellite vehicle with appropriate
instrumentation can be expected to be one of the most potent scientific tools of the
Twentieth Century."[6] The United States had been considering launching orbital satellites
since 1945 under the Bureau of Aeronautics of the United States Navy. The United States
Air Force's Project RAND eventually released the above report, but did not believe that
the satellite was a potential military weapon; rather, they considered it to be a tool for
science, politics, and propaganda. In 1954, the Secretary of Defense stated, "I know of no
American satellite program."[7]
On July 29, 1955, the White House announced that the U.S. intended to launch satellites
by the spring of 1958. This became known as Project Vanguard. On July 31, the Soviets
announced that they intended to launch a satellite by the fall of 1957.

Following pressure by the American Rocket Society, the National Science Foundation,
and the International Geophysical Year, military interest picked up and in early 1955 the
Air Force and Navy were working on Project Orbiter, which involved using a Jupiter C
rocket to launch a satellite. The project succeeded, and Explorer 1 became the United
States' first satellite on January 31, 1958.[8]

In June 1961, three-and-a-half years after the launch of Sputnik 1, the Air Force used
resources of the United States Space Surveillance Network to catalog 115 Earth-orbiting
satellites.[9]

The largest artificial satellite currently orbiting the Earth is the International Space
Station.

Space Surveillance Network

The United States Space Surveillance Network (SSN) has been tracking space objects
since 1957 when the Soviets opened the space age with the launch of Sputnik I. Since
then, the SSN has tracked more than 26,000 space objects orbiting Earth. The SSN
currently tracks more than 8,000 man-made orbiting objects. The rest have re-entered
Earth's turbulent atmosphere and disintegrated, or survived re-entry and impacted the
Earth. The space objects now orbiting Earth range from satellites weighing several tons to
pieces of spent rocket bodies weighing only 10 pounds. About seven percent of the space
objects are operational satellites (i.e., ~560 satellites); the rest are space debris.[10]
USSTRATCOM is primarily interested in the active satellites, but also tracks space debris
which upon reentry might otherwise be mistaken for incoming missiles. The SSN tracks
space objects that are 10 centimeters in diameter (baseball size) or larger.

Non-Military Satellite Services

There are three basic categories of non-military satellite services:[11]

Fixed Satellite Service

Fixed satellite services handle hundreds of billions of voice, data, and video transmission
tasks across all countries and continents between certain points on the earth’s surface.

Mobile Satellite Systems

Mobile satellite systems help connect remote regions, vehicles, ships, people and aircraft
to other parts of the world and/or other mobile or stationary communications units, in
addition to serving as navigation systems.
Scientific Research Satellite (commercial and noncommercial)

Scientific research satellites provide us with meteorological information, land survey data
(e.g., remote sensing), Amateur (HAM) Radio, and other different scientific research
applications such as earth science, marine science, and atmospheric research.

Types

MILSTAR: A communication satellite

• Anti-satellite weapons/"killer satellites" are satellites that are armed and designed
to take out enemy warheads, satellites, and other space assets. They may have particle
weapons, energy weapons, kinetic weapons, nuclear and/or conventional missiles,
and/or a combination of these weapons.
• Astronomical satellites are satellites used for observation of distant planets,
galaxies, and other outer space objects.
• Biosatellites are satellites designed to carry living organisms, generally for
scientific experimentation.
• Communications satellites are satellites stationed in space for the purpose of
telecommunications. Modern communications satellites typically use
geosynchronous orbits, Molniya orbits or Low Earth orbits.
• Miniaturized satellites are satellites of unusually low weights and small sizes.[12]
New classifications are used to categorize these satellites: minisatellite (200–
500 kg), microsatellite (below 200 kg), nanosatellite (below 10 kg).
• Navigational satellites are satellites which use radio time signals transmitted to
enable mobile receivers on the ground to determine their exact location. The
relatively clear line of sight between the satellites and receivers on the ground,
combined with ever-improving electronics, allows satellite navigation systems to
measure location to accuracies on the order of a few meters in real time.
• Reconnaissance satellites are Earth observation or communications
satellites deployed for military or intelligence applications. Little is known about
the full power of these satellites, as governments who operate them usually keep
information pertaining to their reconnaissance satellites classified.
• Earth observation satellites are satellites intended for non-military uses such as
environmental monitoring, meteorology, map making etc. (See especially Earth
Observing System.)
• Space stations are man-made structures that are designed for human beings to
live on in outer space. A space station is distinguished from other manned
spacecraft by its lack of major propulsion or landing facilities — instead, other
vehicles are used as transport to and from the station. Space stations are designed
for medium-term living in orbit, for periods of weeks, months, or even years.
• Tether satellites are satellites which are connected to another satellite by a thin
cable called a tether.
• Weather satellites are primarily used to monitor Earth's weather and climate.[13]

Orbit types
Main article: List of orbits

Various earth orbits to scale; cyan represents low earth orbit, yellow represents medium
earth orbit, the black dashed line represents geosynchronous orbit, the green dash-dot line
the orbit of Global Positioning System (GPS) satellites, and the red dotted line the orbit
of the International Space Station (ISS).

The first satellite, Sputnik 1, was put into orbit around Earth and was therefore in
geocentric orbit. By far this is the most common type of orbit with approximately 2456
artificial satellites orbiting the Earth. Geocentric orbits may be further classified by their
altitude, inclination and eccentricity.

The commonly used altitude classifications are Low Earth Orbit (LEO), Medium Earth
Orbit (MEO) and High Earth Orbit (HEO). Low Earth orbit is any orbit below 2000 km,
and Medium Earth Orbit is any orbit higher than that but still below the altitude for
geosynchronous orbit at 35786 km. High Earth Orbit is any orbit higher than the altitude
for geosynchronous orbit.
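
The 35786 km figure quoted for geosynchronous orbit follows from Kepler's third law applied to one sidereal day. A minimal sketch (the constants are standard values, not from the text):

import math

MU_EARTH = 3.986004418e14     # Earth's gravitational parameter, m^3 s^-2
R_EQUATOR = 6.378137e6        # equatorial radius, m
SIDEREAL_DAY = 86164.1        # s (23 h 56 min 4 s)

def semi_major_axis(period_s):
    """Kepler's third law for a circular orbit: a = (mu * T^2 / (4*pi^2))^(1/3)."""
    return (MU_EARTH * period_s ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)

print(round((semi_major_axis(SIDEREAL_DAY) - R_EQUATOR) / 1000))      # ~35786 km (geosynchronous altitude)
print(round((semi_major_axis(SIDEREAL_DAY / 2) - R_EQUATOR) / 1000))  # ~20180 km (semi-synchronous, cf. the ~20200 km figure below)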

Centric classifications

• Galactocentric orbit: An orbit about the center of a galaxy. Earth's sun follows
this type of orbit about the galactic center of the Milky Way.
• Heliocentric orbit: An orbit around the Sun. In our Solar System, all planets,
comets, and asteroids are in such orbits, as are many artificial satellites and pieces
of space debris. Moons by contrast are not in a heliocentric orbit but rather orbit
their parent planet.
• Geocentric orbit: An orbit around the planet Earth, such as the Moon or artificial
satellites. Currently there are approximately 2465 artificial satellites orbiting the
Earth.
• Areocentric orbit: An orbit around the planet Mars, such as moons or artificial
satellites.

Altitude classifications

• Low Earth Orbit (LEO): Geocentric orbits ranging in altitude from 0–2000 km
(0–1240 miles)
• Medium Earth Orbit (MEO): Geocentric orbits ranging in altitude from 2000
km (1240 miles) to just below geosynchronous orbit at 35786 km (22240 miles).
Also known as an intermediate circular orbit.
• High Earth Orbit (HEO): Geocentric orbits above the altitude of
geosynchronous orbit 35786 km (22240 miles).

Orbital Altitudes of several significant satellites of earth.

Inclination classifications

• Inclined orbit: An orbit whose inclination in reference to the equatorial plane is
not zero degrees.
o Polar orbit: An orbit that passes above or nearly above both poles of the
planet on each revolution. Therefore it has an inclination of (or very close
to) 90 degrees.
o Polar sun synchronous orbit: A nearly polar orbit that passes the equator
at the same local time on every pass. Useful for image taking satellites
because shadows will be nearly the same on every pass.

Eccentricity classifications

• Circular orbit: An orbit that has an eccentricity of 0 and whose path traces a
circle.
o Hohmann transfer orbit: An orbital maneuver that moves a spacecraft
from one circular orbit to another using two engine impulses (a delta-v
sketch follows this list). This maneuver was named after Walter Hohmann.
• Elliptic orbit: An orbit with an eccentricity greater than 0 and less than 1 whose
orbit traces the path of an ellipse.
o Geosynchronous transfer orbit: An elliptic orbit where the perigee is at
the altitude of a Low Earth Orbit (LEO) and the apogee at the altitude of a
geosynchronous orbit.
o Geostationary transfer orbit: An elliptic orbit where the perigee is at the
altitude of a Low Earth Orbit (LEO) and the apogee at the altitude of a
geostationary orbit.
o Molniya orbit: A highly elliptic orbit with inclination of 63.4° and orbital
period of half of a sidereal day (roughly 12 hours). Such a satellite spends
most of its time over a designated area of the planet.
o Tundra orbit: A highly elliptic orbit with inclination of 63.4° and orbital
period of one sidereal day (roughly 24 hours). Such a satellite spends most
of its time over a designated area of the planet.
• Hyperbolic orbit: An orbit with the eccentricity greater than 1. Such an orbit also
has a velocity in excess of the escape velocity and as such, will escape the
gravitational pull of the planet and continue to travel infinitely.
• Parabolic orbit: An orbit with the eccentricity equal to 1. Such an orbit also has a
velocity equal to the escape velocity and therefore will escape the gravitational
pull of the planet and travel until its velocity relative to the planet is 0. If the
speed of such an orbit is increased it will become a hyperbolic orbit.
o Escape orbit (EO): A high-speed parabolic orbit where the object has
escape velocity and is moving away from the planet.
o Capture orbit: A high-speed parabolic orbit where the object has escape
velocity and is moving toward the planet.
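
For the Hohmann transfer item above, the two impulses can be written down from the vis-viva equation. A minimal delta-v sketch under ideal two-body assumptions (the particular LEO and GEO radii are illustrative choices, not from the text):

import math

MU_EARTH = 3.986e14    # m^3 s^-2

def hohmann_delta_v(r1, r2):
    """Total delta-v (m/s) of a two-impulse Hohmann transfer between circular orbits of radii r1 and r2."""
    dv1 = math.sqrt(MU_EARTH / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)   # burn leaving the departure orbit
    dv2 = math.sqrt(MU_EARTH / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))   # circularisation burn at arrival
    return dv1 + dv2

r_leo = 6.678e6    # circular orbit ~300 km above the surface
r_geo = 42.164e6   # geostationary radius
print(round(hohmann_delta_v(r_leo, r_geo)))   # ~3900 m/s from this LEO to GEO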

Synchronous classifications

• Synchronous orbit: An orbit where the satellite has an orbital period equal to the
average rotational period (earth's is: 23 hours, 56 minutes, 4.091 seconds) of the
body being orbited and in the same direction of rotation as that body. To a ground
observer such a satellite would trace an analemma (figure 8) in the sky.
• Semi-synchronous orbit (SSO): An orbit with an altitude of approximately
20200 km (12544.2 miles) and an orbital period equal to one-half of the average
rotational period (earth's is approximately 12 hours) of the body being orbited
• Geosynchronous orbit (GEO): Orbits with an altitude of approximately 35786
km (22240 miles). Such a satellite would trace an analemma (figure 8) in the sky.
o Geostationary orbit (GSO): A geosynchronous orbit with an inclination
of zero. To an observer on the ground this satellite would appear as a fixed
point in the sky.[14]
 Clarke orbit: Another name for a geostationary orbit. Named after
scientist and writer Arthur C. Clarke.
o Supersynchronous orbit: A disposal / storage orbit above GSO/GEO.
Satellites will drift west. Also a synonym for Disposal orbit.
o Subsynchronous orbit: A drift orbit close to but below GSO/GEO.
Satellites will drift east.
o Graveyard orbit: An orbit a few hundred kilometers above
geosynchronous that satellites are moved into at the end of their operation.
 Disposal orbit: A synonym for graveyard orbit.
 Junk orbit: A synonym for graveyard orbit.
• Areosynchronous orbit: A synchronous orbit around the planet Mars with an
orbital period equal in length to Mars' sidereal day, 24.6229 hours.
• Areostationary orbit (ASO): A circular areosynchronous orbit on the equatorial
plane and about 17000 km(10557 miles) above the surface. To an observer on the
ground this satellite would appear as a fixed point in the sky.
• Heliosynchronous orbit: A heliocentric orbit about the Sun where the satellite's
orbital period matches the Sun's period of rotation. These orbits occur at a radius
of 24.36 Gm (0.1628 AU) around the Sun, a little less than half of the orbital
radius of Mercury.

Special classifications

• Sun-synchronous orbit: An orbit which combines altitude and inclination in such
a way that the satellite passes over any given point of the planet's surface at the
same local solar time. Such an orbit can place a satellite in constant sunlight and
is useful for imaging, spy, and weather satellites.
• Moon orbit: The orbital characteristics of earth's moon. Average altitude of
384403 kilometres (238857 mi), elliptical-inclined orbit.

Pseudo-orbit classifications

• Horseshoe orbit: An orbit that appears to a ground observer to be orbiting a
certain planet but is actually in co-orbit with the planet. See asteroids 3753
(Cruithne) and 2002 AA29.
• Exo-orbit: A maneuver where a spacecraft approaches the height of orbit but
lacks the velocity to sustain it.
o Suborbital spaceflight: A synonym for exo-orbit.
• Lunar transfer orbit (LTO)
• Prograde orbit: An orbit with an inclination of less than 90°. Or rather, an orbit
that is in the same direction as the rotation of the primary.
• Retrograde orbit: An orbit with an inclination of more than 90°. Or rather, an
orbit counter to the direction of rotation of the planet. Apart from those in sun-
synchronous orbit, few satellites are launched into retrograde orbit because the
quantity of fuel required to launch them is much greater than for a prograde orbit.
This is because when the rocket starts out on the ground, it already has an
eastward component of velocity equal to the rotational velocity of the planet at its
launch latitude.
• Halo orbit and Lissajous orbit: Orbits "around" Lagrangian points.

Satellite Modules

The satellite's functional versatility is embedded within its technical components and its
operations characteristics. Looking at the "anatomy" of a typical satellite, one discovers
two modules.[11] Note that some novel architectural concepts such as Fractionated
Spacecraft somewhat upset this taxonomy.
Spacecraft bus or service module

This bus module consists of the following subsystems:

• The Structural Subsystems

The structural subsystem provides the mechanical base structure, shields the satellite
from extreme temperature changes and micro-meteorite damage, and controls the
satellite’s spin functions.

• The Telemetry Subsystems

The telemetry subsystem monitors the on-board equipment operations, transmits
equipment operation data to the earth control station, and receives the earth control
station’s commands to perform equipment operation adjustments.

• The Power Subsystems

The power subsystem consists of solar panels and backup batteries that generate power
when the satellite passes into the earth’s shadow. Nuclear power sources (Radioisotope
thermoelectric generators) have been used in several successful satellite programs
including the Nimbus program (1964-1978).[15]

• The Thermal Control Subsystems

The thermal control subsystem helps protect electronic equipment from extreme
temperatures due to intense sunlight or the lack of sun exposure on different sides of the
satellite’s body (e.g. Optical Solar Reflector)

• The Attitude and Orbit Control Subsystems

Main article: Attitude control

The attitude and orbit control subsystem consists of small rocket thrusters that keep
the satellite in the correct orbital position and keep the antennas pointed in the right
directions.

Communication Payload

The second major module is the communication payload, which is made up of
transponders. A transponder is capable of:

• Receiving uplinked radio signals from earth satellite transmission stations
(antennas).
• Amplifying received radio signals
• Sorting the input signals and directing the output signals through input/output
signal multiplexers to the proper downlink antennas for retransmission to earth
satellite receiving stations (antennas).

Launch-capable countries
Main article: Timeline of first orbital launches by nationality

Launch of the first British Skynet military satellite.

This list includes countries with an independent capability to place satellites in orbit,
including production of the necessary launch vehicle. Note: many more countries have
the capability to design and build satellites — which, relatively speaking, does not require
much economic, scientific and industrial capacity — but are unable to launch them,
instead relying on foreign launch services. This list does not consider those numerous
countries, but only lists those capable of launching satellites indigenously, and the date
this capability was first demonstrated. The list does not include consortium satellites or
multi-national satellites.

First launch by country


Order  Country         Year of first launch  Rocket        Satellite
1      Soviet Union    1957                  Sputnik-PS    Sputnik 1
2      United States   1958                  Juno I        Explorer 1
3      Canada          1962                  Thor-Agena    Alouette 1
4      France          1965                  Diamant       Astérix
5      Japan           1970                  Lambda-4S     Ōsumi
6      China           1970                  Long March 1  Dong Fang Hong I
7      United Kingdom  1971                  Black Arrow   Prospero X-3
8      India           1980                  SLV           Rohini
9      Israel          1988                  Shavit        Ofeq 1
—      Russia[1]       1992                  Soyuz-U       Kosmos-2175
—      Ukraine[1]      1992                  Tsyklon-3     Strela (x3, Russian)
10     Iran            2009                  Safir-2       Omid

Notes

1. Russia and Ukraine inherited launch capability from the Soviet Union rather than
developing it indigenously.
2. France and the United Kingdom launched their first satellites on their own launchers from
foreign spaceports.
3. North Korea (1998) and Iraq (1989) have claimed orbital launches (a satellite and a
warhead, respectively), but these claims are unconfirmed.
4. In addition to the above, countries such as South Africa, Spain, Italy, Germany,
Canada, Australia, Argentina, Egypt and private companies such as OTRAG,
have developed their own launchers, but have not had a successful launch.
5. As of 2009, only eight countries from the list above ( Russia and Ukraine instead
of USSR, also USA, Japan, China, India, Israel, and Iran) and one regional
organization (the European Space Agency, ESA) have independently launched
satellites on their own indigenously developed launch vehicles. (The launch
capabilities of the United Kingdom and France now fall under the ESA.)
6. Several other countries, including South Korea, Brazil, Pakistan, Romania,
Taiwan, Indonesia, Kazakhstan, Australia, Malaysia[citation needed] and Turkey, are at
various stages of development of their own small-scale launcher capabilities.
7. It is scheduled that in the summer or autumn of 2009 South Korea will launch a
KSLV rocket (created with the assistance of Russia).
8. North Korea claimed a launch in April 2009, but U.S. and South Korean defense
officials and weapons experts later reported that the rocket failed to send a
satellite into orbit, if that was the goal.[16][17] It is believed that the launch was an
attempt to test a ballistic missile rather than to place a satellite into
orbit, and that even the ballistic missile test was a failure.

Launch capable private entities

On September 28, 2008, the private aerospace firm SpaceX successfully launched its
Falcon 1 rocket into orbit. This marked the first time that a privately built liquid-fueled
booster was able to reach orbit.[18] The rocket carried a prism-shaped 1.5 m (5 ft) long
payload mass simulator that was set into orbit. The dummy satellite, known as Ratsat,
will remain in orbit for between five and ten years before burning up in the
atmosphere.[18]

Countries who have launched satellites with the aid of others


First launch by country including help of other parties[19]

Country                Year of first launch   First satellite               Payloads in orbit in 2008[20]
Soviet Union           1957                   Sputnik 1                     1,398
  (Russia)             (1992)                 (Cosmos-2175)
United States          1958                   Explorer 1                    1,042
Canada                 1962                   Alouette 1                    25
Italy                  1964                   San Marco 1                   14
France                 1965                   Astérix                       44
Australia              1967                   WRESAT                        11
Germany                1969                   Azur                          27
Japan                  1970                   Ōsumi                         111
China                  1970                   Dong Fang Hong I              64
United Kingdom         1971                   Prospero X-3                  25
Poland                 1973                   Intercosmos Kopernikus 500    ?
Netherlands            1974                   ANS                           5
Spain                  1974                   Intasat                       9
India                  1975                   Aryabhata                     34
Indonesia              1976                   Palapa A1                     10
Czechoslovakia         1978                   Magion 1                      5
Bulgaria               1981                   Intercosmos Bulgaria 1300
Brazil                 1985                   Brasilsat A1                  11
Mexico                 1985                   Morelos 1                     7
Sweden                 1986                   Viking                        11
Israel                 1988                   Ofeq 1                        7
Luxembourg             1988                   Astra 1A                      15
Argentina              1990                   Lusat                         10
Pakistan               1990                   Badr-1                        5
South Korea            1992                   Kitsat A                      10
Portugal               1993                   PoSAT-1                       1
Thailand               1993                   Thaicom 1                     6
Turkey                 1994                   Turksat 1B                    5
Ukraine                1995                   Sich-1                        6
Chile                  1995                   FASat-Alfa                    1
Malaysia               1996                   MEASAT                        4
Norway                 1997                   Thor 2                        3
Philippines            1997                   Mabuhay 1                     2
Egypt                  1998                   Nilesat 101                   3
Singapore              1998                   ST-1                          1
Taiwan                 1999                   ROCSAT-1
Denmark                1999                   Ørsted                        3
South Africa           1999                   SUNSAT                        1
Saudi Arabia           2000                   Saudisat 1A                   12
United Arab Emirates   2000                   Thuraya 1                     3
Morocco                2001                   Maroc-Tubsat                  1
Algeria                2002                   Alsat 1                       1
Greece                 2003                   Hellas Sat 2                  2
Nigeria                2003                   Nigeriasat 1                  2
Iran                   2005                   Sina-1                        4
Kazakhstan             2006                   KazSat 1                      1
Belarus                2006                   BelKA                         1
Colombia               2007                   Libertad 1                    1
Vietnam                2008                   VINASAT-1                     1
Venezuela              2008                   Venesat-1                     1

While Canada was the third country to build a satellite which was launched into space,[21]
it was launched aboard a U.S. rocket from a U.S. spaceport. The same goes for Australia,
whose first satellite was launched aboard a donated U.S. Redstone rocket. The first Italian-launched
satellite was San Marco 1, launched on 15 December 1964 on a U.S. Scout rocket from Wallops Island
(VA, USA) with an Italian launch team trained by NASA.[22] Australia's launch project
(WRESAT) involved a donated U.S. missile and U. S. support staff as well as a joint
launch facility with the United Kingdom.[23]

Attacks on satellites
For more details on this topic, see Anti-satellite weapon.

In recent times satellites have been hacked by militant organizations to broadcast
propaganda and to pilfer classified information from military communication
networks.[24][25]

Satellites in low earth orbit have been destroyed by ballistic missiles launched from earth.
Russia, the United States and China have demonstrated the ability to eliminate
satellites.[26] In 2007 the Chinese military shot down an aging weather satellite,[26]
followed by the US Navy shooting down a defunct spy satellite in February 2008.[27]

Jamming

Due to the low received signal strength of satellite transmissions, they are prone to
jamming by land-based transmitters. Such jamming is limited to the geographical area
within the transmitter's range. GPS satellites are potential targets for jamming,[28][29] but
satellite phone and television signals have also been subjected to jamming.[30][31] It is
trivial to transmit a carrier to a geostationary satellite and thus interfere with any other
users of the transponder. It is common on commercial satellite space for earth stations to
transmit at the wrong time or on the wrong frequency and dual-illuminate the transponder,
rendering the frequency unusable. Satellite operators now have sophisticated monitoring
that enables them to pinpoint the source of any carrier and manage the transponder space
effectively.
Satellite Services

• Satellite Internet access


• Satellite phone
• Satellite radio
• Satellite television
• Satellite navigation
Vacuum
A vacuum is a volume of space that is essentially empty of matter, such that its gaseous
pressure is much less than atmospheric pressure.[1] The word comes from the Latin term
for "empty," but in reality, no volume of space can ever be perfectly empty. A perfect
vacuum with a gaseous pressure of absolute zero is a philosophical concept that is never
observed in practice. Physicists often discuss ideal test results that would occur in a
perfect vacuum, which they simply call "vacuum" or "free space" in this context, and use
the term partial vacuum to refer to real vacuum. The Latin term in vacuo is also used to
describe an object as being in what would otherwise be a vacuum.

The quality of a vacuum refers to how closely it approaches a perfect vacuum. The
residual gas pressure is the primary indicator of quality, and is most commonly measured
in units called torr, even in metric contexts. Lower pressures
indicate higher quality, although other variables must also be taken
into account. Quantum theory sets limits for the best possible
quality of vacuum, predicting that no volume of space can be
perfectly empty. Outer space is a natural high quality vacuum,
mostly of much higher quality than can be created artificially with
current technology. Low quality artificial vacuums have been used
for suction for many years.
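
Because vacuum quality is quoted in torr even in metric contexts, a small conversion helper is handy. A sketch using the standard definition 760 torr = 1 atm = 101325 Pa (the example pressures are illustrative, not from the text):

PA_PER_TORR = 101325.0 / 760.0     # exact, by definition of the standard atmosphere

def torr_to_pa(p_torr):
    """Convert a pressure in torr to pascals."""
    return p_torr * PA_PER_TORR

print(round(PA_PER_TORR, 3))   # 1 torr is about 133.322 Pa
print(torr_to_pa(760))         # 101325.0 Pa, one standard atmosphere
print(torr_to_pa(1e-9))        # ~1.3e-7 Pa, an ultra-high-vacuum pressure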

Vacuum has been a frequent topic of philosophical debate since
Ancient Greek times, but was not studied empirically until the 17th
century. Evangelista Torricelli produced the first laboratory vacuum in 1643, and other
experimental techniques were developed as a result of his theories of atmospheric
pressure. A torricellian vacuum is created by filling a tall glass container closed at one
end with mercury and then inverting the container into a bowl to contain the mercury.[2]

Vacuum became a valuable industrial tool in the 20th century with the introduction of
incandescent light bulbs and vacuum tubes, and a wide array of vacuum technology has
since become available. The recent development of human spaceflight has raised interest
in the impact of vacuum on human health, and on life forms in general.

Etymology

From Latin vacuum (an empty space, void), noun use of the neuter of vacuus (empty), related to vacare (to be empty). It is one of the few words in the English language to contain the letter combination uu.
Uses

Light bulbs contain a partial vacuum, usually backfilled with argon, which protects the
tungsten filament

Vacuum is useful in a variety of processes and devices. Its first widespread use was in the
incandescent light bulb to protect the filament from chemical degradation. Its chemical
inertness is also useful for electron beam welding, cold welding, vacuum packing and
vacuum frying. Ultra-high vacuum is used in the study of atomically clean substrates, as
only a very good vacuum preserves atomic-scale clean surfaces for a reasonably long
time (on the order of minutes to days). High to ultra-high vacuum removes the
obstruction of air, allowing particle beams to deposit or remove materials without
contamination. This is the principle behind chemical vapor deposition, physical vapor
deposition, and dry etching which are essential to the fabrication of semiconductors and
optical coatings, and to surface science. The reduction of convection provides the thermal
insulation of thermos bottles. Deep vacuum promotes outgassing which is used in freeze
drying, adhesive preparation, distillation, metallurgy, and process purging. The electrical
properties of vacuum make electron microscopes and vacuum tubes possible, including
cathode ray tubes. The elimination of air friction is useful for flywheel energy storage and
ultracentrifuges.

Vacuum driven machines

Vacuums are commonly used to produce suction, which has an even wider variety of
applications. The Newcomen steam engine used vacuum instead of pressure to drive a
piston. In the 19th century, vacuum was used for traction on Isambard Kingdom Brunel's
experimental atmospheric railway. Vacuum brakes were once widely used on trains in the
UK but, except on heritage railways, they have been replaced by air brakes.
Manifold vacuum can be used to drive accessories on automobiles. The best-known
application is the vacuum servo, used to provide power assistance for the brakes.
Obsolete applications include vacuum-driven windscreen wipers and fuel pumps.

Outer space
Main article: Outer space

Outer space is not a perfect vacuum, but a tenuous plasma awash with charged particles,
electromagnetic fields, and the occasional star.

Outer space has very low density and pressure, and is the closest physical approximation
of a perfect vacuum. It has effectively no friction, allowing stars, planets and moons to
move freely along ideal gravitational trajectories. But no vacuum is truly perfect, not
even in interstellar space, where there are still a few hydrogen atoms per cubic
centimeter.

Stars, planets and moons keep their atmospheres by gravitational attraction, and as such,
atmospheres have no clearly delineated boundary: the density of atmospheric gas simply
decreases with distance from the object. The Earth's atmospheric pressure drops to about 1 Pa (10−3 torr) at 100 km of altitude, the Kármán line, which is a common definition of
the boundary with outer space. Beyond this line, isotropic gas pressure rapidly becomes
insignificant when compared to radiation pressure from the sun and the dynamic pressure
of the solar wind, so the definition of pressure becomes difficult to interpret. The
thermosphere in this range has large gradients of pressure, temperature and composition,
and varies greatly due to space weather. Astrophysicists prefer to use number density to
describe these environments, in units of particles per cubic centimetre.
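
The conversion from pressure to number density follows from the ideal gas law, n = p/(kT). The following minimal Python sketch is not part of the original article, and the temperatures used are assumed illustrative values; it reproduces the order of magnitude near the Kármán line.

    # Ideal-gas conversion from pressure and temperature to particles per cm^3.
    # Illustrative sketch; the temperatures below are assumed values.
    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def number_density_per_cm3(pressure_pa, temperature_k):
        n_per_m3 = pressure_pa / (k_B * temperature_k)
        return n_per_m3 * 1e-6  # convert m^-3 to cm^-3

    print(number_density_per_cm3(101325, 293))  # sea level: ~2.5e19 per cm^3
    print(number_density_per_cm3(1.0, 200))     # ~1 Pa near 100 km: ~3.6e14 per cm^3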

But although it meets the definition of outer space, the atmospheric density within the first few hundred kilometers above the Kármán line is still sufficient to produce significant drag on satellites. Most artificial satellites operate in this region, called low earth orbit, and must periodically fire their engines to maintain orbit. The drag here is
low enough that it could theoretically be overcome by radiation pressure on solar sails, a
proposed propulsion system for interplanetary travel. Planets are too massive for their
trajectories to be affected by these forces, although their atmospheres are eroded by the
solar winds.

All of the observable universe is filled with large numbers of photons, the so-called
cosmic background radiation, and quite likely a correspondingly large number of
neutrinos. The current temperature of this radiation is about 3 K, or -270 degrees Celsius
or -454 degrees Fahrenheit.

Effects on humans and animals


See also: Human adaptation to space

This painting, An Experiment on a Bird in the Air Pump by Joseph Wright of Derby,
1768, depicts an experiment performed by Robert Boyle in 1660.

Humans and animals exposed to vacuum will lose consciousness after a few seconds and
die of hypoxia within minutes, but the symptoms are not nearly as graphic as commonly
shown in pop culture. Blood and other body fluids do boil when their pressure drops
below 6.3 kPa (47 torr), the vapour pressure of water at body temperature.[3] This
condition is called ebullism. The steam may bloat the body to twice its normal size and
slow circulation, but tissues are elastic and porous enough to prevent rupture. Ebullism is
slowed by the pressure containment of blood vessels, so some blood remains liquid.[4][5]
Swelling and ebullism can be restrained by containment in a flight suit. Shuttle astronauts
wear a fitted elastic garment called the Crew Altitude Protection Suit (CAPS) which
prevents ebullism at pressures as low as 2 kPa (15 torr).[6] Rapid evaporative cooling of
the skin will create frost, particularly in the mouth, but this is not a significant hazard.

Animal experiments show that rapid and complete recovery is normal for exposures
shorter than 90 seconds, while longer full-body exposures are fatal and resuscitation has
never been successful.[7] There is only a limited amount of data available from human
accidents, but it is consistent with animal data. Limbs may be exposed for much longer if
breathing is not impaired.[3] Robert Boyle was the first to show in 1660 that vacuum is
lethal to small animals.

During 1942, in one of a series of experiments on human subjects for the Luftwaffe, the
Nazi regime experimented on prisoners in Dachau concentration camp by exposing them
to low pressure.
Cold or oxygen-rich atmospheres can sustain life at pressures much lower than
atmospheric, as long as the density of oxygen is similar to that of standard sea-level
atmosphere. The colder air temperatures found at altitudes of up to 3 km generally
compensate for the lower pressures there.[3] Above this altitude, oxygen enrichment is
necessary to prevent altitude sickness, and spacesuits are necessary to prevent ebullism
above 19 km.[3] Most spacesuits use only 20 kPa (150 torr) of pure oxygen, just enough to
sustain full consciousness. This pressure is high enough to prevent ebullism, but simple
evaporation of blood can still cause decompression sickness and gas embolisms if not
managed.

Rapid decompression can be much more dangerous than vacuum exposure itself. Even if
the victim does not hold his breath, venting through the windpipe may be too slow to
prevent the fatal rupture of the delicate alveoli of the lungs.[3] Eardrums and sinuses may
be ruptured by rapid decompression, soft tissues may bruise and seep blood, and the
stress of shock will accelerate oxygen consumption leading to hypoxia.[8] Injuries caused
by rapid decompression are called barotrauma. A pressure drop as small as 13 kPa (100 torr), which produces no symptoms if it is gradual, may be fatal if it occurs suddenly.[3]

Some extremophile microorganisms, such as tardigrades, can survive vacuum for periods of days (http://en.wikipedia.org/wiki/Tardigrade).

Historical interpretation

Historically, there has been much dispute over whether such a thing as a vacuum can
exist. Ancient Greek philosophers did not like to admit the existence of a vacuum, asking
themselves "how can 'nothing' be something?". Plato found the idea of a vacuum
inconceivable. He believed that all physical things were instantiations of an abstract
Platonic ideal, and he could not conceive of an "ideal" form of a vacuum. Similarly,
Aristotle considered the creation of a vacuum impossible — nothing could not be
something. Later Greek philosophers thought that a vacuum could exist outside the
cosmos, but not within it. Hero of Alexandria was the first to challenge this belief in the
first century AD, but his attempts to create an artificial vacuum failed.[9]

In the medieval Islamic world, the Muslim physicist and philosopher, Al-Farabi
(Alpharabius, 872-950), conducted a small experiment concerning the existence of
vacuum, in which he investigated handheld plungers in water.[10] He concluded that air's
volume can expand to fill available space, and he suggested that the concept of perfect
vacuum was incoherent.[11] However, the Muslim physicist Ibn al-Haytham (Alhazen,
965-1039) and the Mu'tazili theologians disagreed with Aristotle and Al-Farabi, and they
supported the existence of a void. Using geometry, Ibn al-Haytham mathematically
demonstrated that place (al-makan) is the imagined three-dimensional void between the
inner surfaces of a containing body.[12] Abū Rayhān al-Bīrūnī also stated that "there is no observable evidence that rules out the possibility of vacuum".[13] The first suction pump was invented in 1206 by the Muslim engineer and inventor Al-Jazari. The suction pump later appeared in Europe from the 15th century.[14][15][16] Taqi al-Din's six-cylinder 'Monobloc' pump, invented in 1551, could also create a partial vacuum, which was formed "as the lead weight moves upwards, it pulls the piston with it, creating vacuum which sucks the water through a non-return clack valve into the piston cylinder."[17]

Torricelli's mercury barometer produced one of the first sustained vacuums in a laboratory.

In medieval Europe, the Catholic Church held the idea of a vacuum to be immoral or
even heretical. The absence of anything implied the absence of God, and harkened back
to the void prior to the creation story in the book of Genesis. Medieval thought
experiments into the idea of a vacuum considered whether a vacuum was present, if only
for an instant, between two flat plates when they were rapidly separated. There was much
discussion of whether the air moved in quickly enough as the plates were separated, or, as
Walter Burley postulated, whether a 'celestial agent' prevented the vacuum arising. The
commonly held view that nature abhorred a vacuum was called horror vacui. This
speculation was shut down by the 1277 Paris condemnations of Bishop Etienne Tempier,
which required there to be no restrictions on the powers of God, which led to the
conclusion that God could create a vacuum if he so wished.[18] René Descartes also argued against the existence of a vacuum, arguing along the following lines: “Space is identical with extension, but extension is connected with bodies; thus there is no space without bodies and hence no empty space (vacuum).” In spite of this, opposition to the
idea of a vacuum existing in nature continued into the Scientific Revolution, with
scholars such as Paolo Casati taking an anti-vacuist position. Jean Buridan reported in the
14th century that teams of ten horses could not pull open bellows when the port was
sealed, apparently because of horror vacui.[9]
The Crookes tube, used to discover and study cathode rays, was an evolution of the
Geissler tube.

The belief in horror vacui was overthrown in the 17th century. Water pump designs had
improved by then to the point that they produced measurable vacuums, but this was not
immediately understood. What was known was that suction pumps could not pull water
beyond a certain height: 18 Florentine yards according to a measurement taken around
1635. (The conversion to metres is uncertain, but it would be about 9 or 10 metres.) This
limit was a concern to irrigation projects, mine drainage, and decorative water fountains
planned by the Duke of Tuscany, so the Duke commissioned Galileo to investigate the
problem. Galileo advertised the puzzle to other scientists, including Gasparo Berti who
replicated it by building the first water barometer in Rome in 1639.[19] Berti's barometer
produced a vacuum above the water column, but he could not explain it. The
breakthrough was made by Evangelista Torricelli in 1643. Building upon Galileo's notes,
he built the first mercury barometer and wrote a convincing argument that the space at the
top was a vacuum. The height of the column was then limited to the maximum weight
that atmospheric pressure could support. Some people believe that although Torricelli's
experiment was crucial, it was Blaise Pascal's experiments that proved the top space
really contained vacuum.

In 1654, Otto von Guericke invented the first vacuum pump and conducted his famous
Magdeburg hemispheres experiment, showing that teams of horses could not separate two
hemispheres from which the air had been (partially) evacuated. Robert Boyle improved
Guericke's design and conducted experiments on the properties of vacuum. Robert Hooke
also helped Boyle produce an air pump which helped to produce the vacuum. The study
of vacuum then lapsed until 1850 when August Toepler invented the Toepler Pump. Then
in 1855 Heinrich Geissler invented the mercury displacement pump and achieved a
record vacuum of about 10 Pa (0.1 torr). A number of electrical properties become
observable at this vacuum level, and this renewed interest in vacuum. This, in turn, led to
the development of the vacuum tube. Shortly after this Hermann Sprengel invented the
Sprengel Pump in 1865.

While outer space has been likened to a vacuum, early theories of the nature of light
relied upon the existence of an invisible, aetherial medium which would convey waves of
light. (Isaac Newton relied on this idea to explain refraction and radiated heat).[20] This
evolved into the luminiferous aether of the 19th century, but the idea was known to have
significant shortcomings - specifically, that if the Earth were moving through a material
medium, the medium would have to be both extremely tenuous (because the Earth is not
detectably slowed in its orbit), and extremely rigid (because vibrations propagate so
rapidly). An 1891 article by William Crookes noted: "the [freeing of] occluded gases into
the vacuum of space".[21] Even up until 1912, astronomer Henry Pickering commented:
"While the interstellar absorbing medium may be simply the ether, [it] is characteristic of
a gas, and free gaseous molecules are certainly there".[22]

In 1887, the Michelson-Morley experiment, using an interferometer to attempt to detect the change in the speed of light caused by the Earth moving with respect to the aether,
was a famous null result, showing that there really was no static, pervasive medium
throughout space and through which the Earth moved as though through a wind. While
there is therefore no aether, and no such entity is required for the propagation of light,
space between the stars is not completely empty. Besides the various particles which
comprise cosmic radiation, there is a cosmic background of photonic radiation (light),
including the thermal background at about 2.7 K, seen as a relic of the Big Bang. None of
these findings affect the outcome of the Michelson-Morley experiment to any significant
degree.

Einstein argued that physical objects are not located in space, but rather have a spatial
extent. Seen this way, the concept of empty space loses its meaning.[23] Rather, space is an
abstraction, based on the relationships between local objects. Nevertheless, the general
theory of relativity admits a pervasive gravitational field, which, in Einstein's words[24],
may be regarded as an "aether", with properties varying from one location to another.
One must take care, though, to not ascribe to it material properties such as velocity and so
on.

In 1930, Paul Dirac proposed a model of vacuum as an infinite sea of particles possessing
negative energy, called the Dirac sea. This theory helped refine the predictions of his
earlier formulated Dirac equation, and successfully predicted the existence of the
positron, discovered two years later in 1932. Despite this early success, the idea was soon
abandoned in favour of the more elegant quantum field theory.

The development of quantum mechanics has complicated the modern interpretation of vacuum by requiring indeterminacy. Niels Bohr and Werner Heisenberg's uncertainty principle and Copenhagen interpretation, formulated in 1927, predict a fundamental uncertainty in the instantaneous measurability of the position and momentum of any particle, which, not unlike the gravitational field, questions the emptiness of space
between particles. In the late 20th century, this principle was understood to also predict a
fundamental uncertainty in the number of particles in a region of space, leading to
predictions of virtual particles arising spontaneously out of the void. In other words, there
is a lower bound on the vacuum, dictated by the lowest possible energy state of the
quantized fields in any region of space.
Quantum-mechanical definition
For more details on this topic, see vacuum state.

In quantum mechanics, the vacuum is defined as the state (i.e. solution to the equations of
the theory) with the lowest energy. To first approximation, this is simply a state with no
particles, hence the name.

Even an ideal vacuum, thought of as the complete absence of anything, will not in
practice remain empty. Consider a vacuum chamber that has been completely evacuated,
so that the (classical) particle concentration is zero. The walls of the chamber will emit
light in the form of black body radiation. This light carries momentum, so the vacuum
does have a radiation pressure. This limitation applies even to the vacuum of interstellar
space. Even if a region of space contains no particles, the cosmic microwave background
fills the entire universe with black body radiation.

An ideal vacuum cannot exist even inside of a molecule. Each atom in the molecule
exists as a probability function of space, which has a certain non-zero value everywhere
in a given volume. Thus, even "between" the atoms there is a certain probability of
finding a particle, so the space cannot be said to be a vacuum.

More fundamentally, quantum mechanics predicts that vacuum energy will be different
from its naive, classical value. The quantum correction to the energy is called the zero-
point energy and consists of energies of virtual particles that have a brief existence. This
is called vacuum fluctuation. Vacuum fluctuations may also be related to the so-called
cosmological constant in cosmology. The best evidence for vacuum fluctuations is the
Casimir effect and the Lamb shift.[18]

In quantum field theory and string theory, the term "vacuum" is used to represent the
ground state in the Hilbert space, that is, the state with the lowest possible energy. In free
(non-interacting) quantum field theories, this state is analogous to the ground state of a
quantum harmonic oscillator. If the theory is obtained by quantization of a classical
theory, each stationary point of the energy in the configuration space gives rise to a single
vacuum. String theory is believed to have a huge number of vacua - the so-called string
theory landscape.
Pumping

The manual water pump draws water up from a well by creating a vacuum that water
rushes in to fill. However, although a vacuum is responsible for creating the negative
pressure that draws up the water, the vacuum itself quickly disintegrates due to the weak
pressure on the other side of the pump created by the porous and easily saturated dirt.
Main article: Vacuum pump

Fluids cannot be pulled, so it is technically impossible to create a vacuum by suction. Suction can spread and dilute a vacuum by letting a higher pressure push fluids into it,
but the vacuum has to be created first before suction can occur. The easiest way to create
an artificial vacuum is to expand the volume of a container. For example, the diaphragm
muscle expands the chest cavity, which causes the volume of the lungs to increase. This
expansion reduces the pressure and creates a partial vacuum, which is soon filled by air
pushed in by atmospheric pressure.

To continue evacuating a chamber indefinitely without requiring infinite growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again.
This is the principle behind positive displacement pumps, like the manual water pump for
example. Inside the pump, a mechanism expands a small sealed cavity to create a
vacuum. Because of the pressure differential, some fluid from the chamber (or the well,
in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed
from the chamber, opened to the atmosphere, and squeezed back to a minute size.
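
As a rough illustration of why repeated cycles are needed, the pressure in an ideally sealed, isothermal chamber falls geometrically with the number of strokes. The sketch below is not from the article, and the chamber and cavity volumes are made-up illustrative numbers.

    # Illustrative positive-displacement pump-down (ideal gas, isothermal, no leaks
    # or dead volume). Each stroke expands the gas into the pump cavity and expels
    # the trapped fraction, so pressure falls by V_chamber / (V_chamber + V_cavity).
    def pump_down(p_start_pa, v_chamber, v_cavity, strokes):
        p = p_start_pa
        for _ in range(strokes):
            p *= v_chamber / (v_chamber + v_cavity)
        return p

    # 10-litre chamber, 0.5-litre swept cavity, starting from atmospheric pressure:
    print(pump_down(101325.0, 10.0, 0.5, 100))  # ~770 Pa after 100 strokes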

A cutaway view of a turbomolecular pump, a momentum transfer pump used to achieve high vacuum

The above explanation is merely a simple introduction to
vacuum pumping, and is not representative of the entire range of pumps in use. Many
variations of the positive displacement pump have been developed, and many other pump
designs rely on fundamentally different principles. Momentum transfer pumps, which
bear some similarities to dynamic pumps used at higher pressures, can achieve much
higher quality vacuums than positive displacement pumps. Entrapment pumps can
capture gases in a solid or absorbed state, often with no moving parts, no seals and no
vibration. None of these pumps are universal; each type has important performance
limitations. They all share a difficulty in pumping low molecular weight gases, especially
hydrogen, helium, and neon.

The lowest pressure that can be attained in a system is also dependent on many things
other than the nature of the pumps. Multiple pumps may be connected in series, called
stages, to achieve higher vacuums. The choice of seals, chamber geometry, materials, and
pump-down procedures will all have an impact. Collectively, these are called vacuum
technique. And sometimes, the final pressure is not the only relevant characteristic.
Pumping systems differ in oil contamination, vibration, preferential pumping of certain
gases, pump-down speeds, intermittent duty cycle, reliability, or tolerance to high leakage
rates.

In ultra high vacuum systems, some very "odd" leakage paths and outgassing sources
must be considered. The water absorption of aluminium and palladium becomes an
unacceptable source of outgassing, and even the adsorptivity of hard metals such as
stainless steel or titanium must be considered. Some oils and greases will boil off in
extreme vacuums. The permeability of the metallic chamber walls may have to be
considered, and the grain direction of the metallic flanges should be parallel to the flange
face.

The lowest pressures currently achievable in the laboratory are about 10−13 torr.[25] However, pressures as low as 5×10−17 torr have been indirectly measured in a 4 K cryogenic vacuum system.[26]
Outgassing
Main article: Outgassing

Evaporation and sublimation into a vacuum is called outgassing. All materials, solid or
liquid, have a small vapour pressure, and their outgassing becomes important when the
vacuum pressure falls below this vapour pressure. In man-made systems, outgassing has
the same effect as a leak and can limit the achievable vacuum. Outgassing products may
condense on nearby colder surfaces, which can be troublesome if they obscure optical
instruments or react with other materials. This is of great concern to space missions,
where an obscured telescope or solar cell can ruin an expensive mission.

The most prevalent outgassing product in man-made vacuum systems is water absorbed
by chamber materials. It can be reduced by desiccating or baking the chamber, and
removing absorbent materials. Outgassed water can condense in the oil of rotary vane
pumps and reduce their net speed drastically if gas ballasting is not used. High vacuum
systems must be clean and free of organic matter to minimize outgassing.

Ultra-high vacuum systems are usually baked, preferably under vacuum, to temporarily
raise the vapour pressure of all outgassing materials and boil them off. Once the bulk of
the outgassing materials are boiled off and evacuated, the system may be cooled to lower
vapour pressures and minimize residual outgassing during actual operation. Some
systems are cooled well below room temperature by liquid nitrogen to shut down residual
outgassing and simultaneously cryopump the system.

Quality

The quality of a vacuum is indicated by the amount of matter remaining in the system, so
that a high quality vacuum is one with very little matter left in it. Vacuum is primarily
measured by its absolute pressure, but a complete characterization requires further
parameters, such as temperature and chemical composition. One of the most important
parameters is the mean free path (MFP) of residual gases, which indicates the average
distance that molecules will travel between collisions with each other. As the gas density
decreases, the MFP increases, and when the MFP is longer than the chamber, pump,
spacecraft, or other objects present, the continuum assumptions of fluid mechanics do not
apply. This vacuum state is called high vacuum, and the study of fluid flows in this
regime is called particle gas dynamics. The MFP of air at atmospheric pressure is very
short, 70 nm, but at 100 mPa (~1×10−3 torr) the MFP of room temperature air is roughly
100 mm, which is on the order of everyday objects such as vacuum tubes. The Crookes
radiometer turns when the MFP is larger than the size of the vanes.
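
The figures above follow from the standard kinetic-theory estimate λ = kT / (√2 π d² p). The minimal sketch below is not from the article; the effective molecular diameter of air is an assumed textbook value.

    import math

    # Mean free path of an ideal gas: lambda = k*T / (sqrt(2) * pi * d^2 * p).
    # The effective molecular diameter d is an assumed value for air.
    k_B = 1.380649e-23  # J/K
    d = 3.7e-10         # m

    def mean_free_path_m(pressure_pa, temperature_k=293.0):
        return k_B * temperature_k / (math.sqrt(2) * math.pi * d**2 * pressure_pa)

    print(mean_free_path_m(101325))  # ~6.6e-8 m, i.e. about 70 nm
    print(mean_free_path_m(0.1))     # ~0.07 m at 100 mPa, the same order as the figure above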

Vacuum quality is subdivided into ranges according to the technology required to achieve
it or measure it. These ranges do not have universally agreed definitions, but a typical
distribution is as follows:[27][28]

Atmospheric pressure      760 torr                    101.3 kPa
Low vacuum                760 to 25 torr              100 to 3 kPa
Medium vacuum             25 to 1×10^−3 torr          3 kPa to 100 mPa
High vacuum               1×10^−3 to 1×10^−9 torr     100 mPa to 100 nPa
Ultra high vacuum         1×10^−9 to 1×10^−12 torr    100 nPa to 100 pPa
Extremely high vacuum     <1×10^−12 torr              <100 pPa
Outer space               1×10^−6 to <3×10^−17 torr   100 µPa to <3 fPa
Perfect vacuum            0 torr                      0 Pa

• Atmospheric pressure is variable but standardized at 101.325 kPa (760 torr)


• Low vacuum, also called rough vacuum or coarse vacuum, is vacuum that can be
achieved or measured with rudimentary equipment such as a vacuum cleaner and
a liquid column manometer.
• Medium vacuum is vacuum that can be achieved with a single pump, but is too
low to measure with a liquid or mechanical manometer. It can be measured with a
McLeod gauge, thermal gauge or a capacitive gauge.
• High vacuum is vacuum where the MFP of residual gases is longer than the size
of the chamber or of the object under test. High vacuum usually requires multi-
stage pumping and ion gauge measurement. Some texts differentiate between high
vacuum and very high vacuum.
• Ultra high vacuum requires baking the chamber to remove trace gases, and other
special procedures. British and German standards define ultra high vacuum as
pressures below 10−6 Pa (10−8 torr).[29][30]
• Deep space is generally much more empty than any artificial vacuum. It may or
may not meet the definition of high vacuum above, depending on what region of
space and astronomical bodies are being considered. For example, the MFP of
interplanetary space is smaller than the size of the solar system, but larger than
small planets and moons. As a result, solar winds exhibit continuum flow on the
scale of the solar system, but must be considered as a bombardment of particles
with respect to the Earth and Moon.
• Perfect vacuum is an ideal state that cannot be obtained in a laboratory, nor can it
be found or obtained anywhere else in the universe, apart from possibly the
singularity of a black hole.

Examples
                          pressure (Pa)        pressure (torr)    mean free path     molecules per cm3
Vacuum cleaner            approx. 80 kPa       600                70 nm              10^19
Liquid ring vacuum pump   approx. 3.2 kPa      24
Freeze drying             100 to 10 Pa         1 to 0.1           100 µm             10^16
Rotary vane pump          100 Pa to 100 mPa    1 to 10^−3         100 µm to 10 cm    10^16 to 10^13
Incandescent light bulb   10 to 1 Pa           0.1 to 0.01        1 mm to 1 cm       10^14
Thermos bottle            1 to 0.01 Pa[1]      10^−2 to 10^−4     1 cm to 1 m        10^12
Earth thermosphere        1 Pa to 100 nPa      10^−3 to 10^−10    1 cm to 1000 km    10^14 to 10^6
Vacuum tube               10 µPa to 10 nPa     10^−7 to 10^−10
Cryopumped MBE chamber    100 nPa to 1 nPa     10^−9 to 10^−11    1 to 10^5 km       10^9 to 10^4
Pressure on the Moon      approx. 1 nPa        10^−11                                4×10^5[31]
Interplanetary space                                                                 10[1]
Interstellar space                                                                   1[32]
Intergalactic space                                                                  10^−6[1]

Measurement
Main article: Pressure measurement

Vacuum is measured in units of pressure. The SI unit of pressure is the pascal (symbol Pa), but vacuum is usually measured in torr, named for Torricelli, an early Italian physicist (1608–1647). A torr is equal to the pressure exerted by a millimetre of mercury (mmHg) in a manometer, with 1 torr equaling 133.3223684 pascals above absolute zero
pressure. Vacuum is often also measured using inches of mercury on the barometric scale
or as a percentage of atmospheric pressure in bars or atmospheres. Low vacuum is often
measured in inches of mercury (inHg), millimeters of mercury (mmHg) or kilopascals
(kPa) below atmospheric pressure. "Below atmospheric" means that the absolute pressure
is equal to the current atmospheric pressure (e.g. 29.92 inHg) minus the vacuum pressure
in the same units. Thus a vacuum of 26 inHg is equivalent to an absolute pressure of about 4 inHg (29.92 inHg − 26 inHg = 3.92 inHg).

In other words, most low vacuum gauges that read, for example, −28 inHg at full vacuum are actually reporting an absolute pressure of about 2 inHg, or 50.79 torr. Many inexpensive low vacuum gauges have a margin of error and may report a vacuum of −30 inHg (0 torr), but in practice a two-stage rotary vane pump or another medium-vacuum pump is generally required to get much below 25 torr.
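
The gauge-to-absolute conversion described above can be written out explicitly. The sketch below assumes a standard atmosphere of 29.92 inHg and the exact factor 1 inHg = 25.4 torr; it is an illustration, not part of the original text.

    # Convert a "below atmospheric" vacuum reading in inHg to absolute pressure.
    # Assumes standard atmospheric pressure; 1 inHg = 25.4 mmHg = 25.4 torr.
    ATMOSPHERE_INHG = 29.92
    TORR_PER_INHG = 25.4

    def gauge_vacuum_to_absolute(vacuum_inhg):
        absolute_inhg = ATMOSPHERE_INHG - vacuum_inhg
        return absolute_inhg, absolute_inhg * TORR_PER_INHG

    print(gauge_vacuum_to_absolute(26))  # (3.92 inHg, ~99.6 torr)
    print(gauge_vacuum_to_absolute(28))  # (1.92 inHg, ~48.8 torr), i.e. roughly the
                                         # rounded "2 inHg, or 50.79 torr" quoted above
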
A glass McLeod gauge, drained of mercury

Many devices are used to measure the pressure in a vacuum, depending on what range of
vacuum is needed.[33]

Hydrostatic gauges (such as the mercury column manometer) consist of a vertical column of liquid in a tube whose ends are exposed to different pressures. The column
will rise or fall until its weight is in equilibrium with the pressure differential between the
two ends of the tube. The simplest design is a closed-end U-shaped tube, one side of
which is connected to the region of interest. Any fluid can be used, but mercury is
preferred for its high density and low vapour pressure. Simple hydrostatic gauges can
measure pressures ranging from 1 torr (100 Pa) to above atmospheric. An important
variation is the McLeod gauge which isolates a known volume of vacuum and
compresses it to multiply the height variation of the liquid column. The McLeod gauge
can measure vacuums as high as 10−6 torr (0.1 mPa), which is the lowest direct
measurement of pressure that is possible with current technology. Other vacuum gauges
can measure lower pressures, but only indirectly by measurement of other pressure-
controlled properties. These indirect measurements must be calibrated via a direct
measurement, most commonly a McLeod gauge.[34]
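
The multiplication performed by a McLeod gauge is essentially Boyle's law: a known volume of the rarefied gas is compressed isothermally into a small capillary, where its much higher pressure can be read as a liquid-column height. The simplified sketch below is illustrative only; the volumes and column reading are made-up numbers, not figures from the article.

    # McLeod gauge principle (Boyle's law, isothermal ideal gas); simplified sketch.
    def mcleod_original_pressure_torr(v_bulb_cm3, v_capillary_cm3, column_mm):
        # The compressed pressure is read as a mercury column height (1 mmHg = 1 torr);
        # Boyle's law p1*V1 = p2*V2 then gives the original (unknown) pressure.
        p_compressed_torr = column_mm
        return p_compressed_torr * v_capillary_cm3 / v_bulb_cm3

    # Compressing 100 cm^3 of gas into 0.01 cm^3 and reading a 10 mm column:
    print(mcleod_original_pressure_torr(100.0, 0.01, 10.0))  # 1e-3 torr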

Mechanical or elastic gauges depend on a Bourdon tube, diaphragm, or capsule, usually made of metal, which will change shape in response to the pressure of the region in question. A variation on this idea is the capacitance manometer, in which the diaphragm makes up a part of a capacitor. A change in pressure leads to the flexure of the diaphragm, which results in a change in capacitance. These gauges are effective from around atmospheric pressure down to 10−4 torr.

Thermal conductivity gauges rely on the fact that the ability of a gas to conduct heat
decreases with pressure. In this type of gauge, a wire filament is heated by running
current through it. A thermocouple or Resistance Temperature Detector (RTD) can then
be used to measure the temperature of the filament. This temperature is dependent on the
rate at which the filament loses heat to the surrounding gas, and therefore on the thermal
conductivity. A common variant is the Pirani gauge, which uses a single platinum filament as both the heated element and RTD. These gauges are accurate from 10 torr to
10−3 torr, but they are sensitive to the chemical composition of the gases being measured.

Ion gauges are used in ultrahigh vacuum. They come in two types: hot cathode and cold
cathode. In the hot cathode version an electrically heated filament produces an electron
beam. The electrons travel through the gauge and ionize gas molecules around them. The
resulting ions are collected at a negative electrode. The current depends on the number of
ions, which depends on the pressure in the gauge. Hot cathode gauges are accurate from
10−3 torr to 10−10 torr. The principle behind the cold cathode version is the same, except that electrons are produced by a high-voltage electrical discharge. Cold
cathode gauges are accurate from 10−2 torr to 10−9 torr. Ionization gauge calibration is
very sensitive to construction geometry, chemical composition of gases being measured,
corrosion and surface deposits. Their calibration can be invalidated by activation at
atmospheric pressure or low vacuum. The composition of gases at high vacuums will
usually be unpredictable, so a mass spectrometer must be used in conjunction with the
ionization gauge for accurate measurement.[35]

Properties

As a vacuum approaches perfection, several properties of space approach ideal limiting values. The values which would be attained in a perfect vacuum are called the free space constants. Some common ones are as follows:

• The speed of light c approaches the speed of light in vacuum, c0 = 299,792,458 m/s, but is always slower in any real medium
• The index of refraction n approaches 1.0, but is always higher
• The electric permittivity (ε) approaches the electric constant ε0 ≈ 8.8541878176×10−12 farads per meter (F/m)
• The magnetic permeability (μ) approaches the magnetic constant μ0 = 4π×10−7 N/A²
• The characteristic impedance (η) approaches the characteristic impedance of vacuum Z0 ≈ 376.73 Ω
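
These constants are not independent. For reference (standard electromagnetic relations, not stated in the original text), in SI units:

    c_0 = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 2.998 \times 10^{8}\ \mathrm{m/s},
    \qquad
    Z_0 = \sqrt{\frac{\mu_0}{\varepsilon_0}} = \mu_0 c_0 \approx 376.73\ \Omega
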
Gravitation

Gravitation keeps the planets in orbit about the Sun. (Not to scale)
Gravitation is a natural phenomenon by which objects with mass attract one another.[1]
In everyday life, gravitation is most commonly thought of as the agency which lends
weight to objects with mass. Gravitation compels dispersed matter to coalesce, thus it
accounts for the very existence of the Earth, the Sun, and most of the macroscopic objects
in the universe. It is responsible for keeping the Earth and the other planets in their orbits
around the Sun; for keeping the Moon in its orbit around the Earth, for the formation of
tides; for convection (by which fluid flow occurs under the influence of a temperature
gradient and gravity); for heating the interiors of forming stars and planets to very high
temperatures; and for various other phenomena that we observe. Modern physics
describes gravitation using the general theory of relativity, in which gravitation is a
consequence of the curvature of spacetime which governs the motion of inertial objects.
The simpler Newton's law of universal gravitation provides an excellent approximation
for most calculations.

The terms gravitation and gravity are mostly interchangeable in everyday use, but a
distinction may be made in scientific usage. "Gravitation" is a general term describing the
phenomenon by which bodies with mass are attracted to one another, while "gravity" refers specifically to the net force exerted by the Earth on objects in its vicinity, which also includes the effect of other factors, such as the Earth's rotation.

History of gravitational theory


Main article: History of gravitational theory

Scientific revolution

Modern work on gravitational theory began with the work of Galileo Galilei in the late
16th and early 17th centuries. In his famous (though possibly apocryphal)[4] experiment
dropping balls from the Tower of Pisa, and later with careful measurements of balls
rolling down inclines, Galileo showed that gravitation accelerates all objects at the same
rate. This was a major departure from Aristotle's belief that heavier objects are
accelerated faster.[5] Galileo correctly postulated air resistance as the reason that lighter
objects may fall more slowly in an atmosphere. Galileo's work set the stage for the
formulation of Newton's theory of gravity.
Newton's theory of gravitation
Main article: Newton's law of universal gravitation

In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. In his own words, “I
deduced that the forces which keep the planets in their orbs must [be] reciprocally as the
squares of their distances from the centers about which they revolve: and thereby
compared the force requisite to keep the Moon in her Orb with the force of gravity at the
surface of the Earth; and found them answer pretty nearly.”[6]
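
Newton's comparison of the Moon's orbital acceleration with surface gravity can be reproduced with rough modern values. The following Python sketch is illustrative only; the orbital radius, period and Earth radius are approximate values not given in the text.

    import math

    # Rough check of Newton's "Moon test" with approximate modern values.
    g_surface = 9.81      # m/s^2, gravity at the Earth's surface
    r_earth = 6.371e6     # m, Earth's radius
    r_orbit = 3.844e8     # m, mean Earth-Moon distance (~60 Earth radii)
    T = 27.32 * 86400     # s, sidereal month

    a_moon = 4 * math.pi**2 * r_orbit / T**2               # centripetal acceleration of the Moon
    a_inverse_square = g_surface * (r_earth / r_orbit)**2  # inverse-square prediction

    print(a_moon)            # ~2.72e-3 m/s^2
    print(a_inverse_square)  # ~2.69e-3 m/s^2, so the two "answer pretty nearly"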

Newton's theory enjoyed its greatest success when it was used to predict the existence of
Neptune based on motions of Uranus that could not be accounted for by the actions of the
other planets. Calculations by John Couch Adams and Urbain Le Verrier both predicted
the general position of the planet, and Le Verrier's calculations are what led Johann
Gottfried Galle to the discovery of Neptune.

Ironically, it was another discrepancy in a planet's orbit that helped to point out flaws in
Newton's theory. By the end of the 19th century, it was known that the orbit of Mercury
showed slight perturbations that could not be accounted for entirely under Newton's
theory, but all searches for another perturbing body (such as a planet orbiting the Sun
even closer than Mercury) had been fruitless. The issue was resolved in 1915 by Albert
Einstein's new General Theory of Relativity, which accounted for the small discrepancy
in Mercury's orbit.

Although Newton's theory has been superseded, most modern non-relativistic gravitational calculations are still made using Newton's theory because it is a much
simpler theory to work with than General Relativity, and gives sufficiently accurate
results for most applications.

Gravitational torsion, weak equivalence principle and gravitational gradient
See also: Eötvös experiment
Loránd Eötvös published on surface tension between 1876 and 1886. The torsion or Eötvös balance, designed by the Hungarian Baron Loránd Eötvös, is a sensitive instrument for measuring the density of underlying rock strata. The device measures not only the direction of the force of gravity, but also how the force of gravity varies across the horizontal plane, and so determines the distribution of masses in the Earth's crust. The Eötvös torsion balance, an important instrument of geodesy and geophysics throughout the world, is used to study the Earth's physical properties, for mine exploration, and in the search for minerals such as oil, coal and ores. Eötvös' law of capillarity (weak equivalence principle) served as a basis for Einstein's theory of relativity. (Capillarity: the property or exertion of capillary attraction or repulsion, a force that is the resultant of adhesion, cohesion, and surface tension in liquids which are in contact with solids, causing the liquid surface to rise or be depressed.)[7][8] These experiments demonstrate that all objects fall at the same rate when friction (including air resistance) is negligible. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see if they hit the ground at the same time. More sophisticated tests use a torsion balance of a type invented by Loránd Eötvös. Satellite experiments are planned for more accurate experiments in space;[9] they verify the weak principle.

General relativity

Main article: Introduction to general relativity

In general relativity, the effects of gravitation are ascribed to spacetime curvature instead of a force. The starting point for general relativity is the equivalence principle,
which equates free fall with inertial motion, and describes free-falling inertial objects as
being accelerated relative to non-inertial observers on the ground.[10][11] In Newtonian
physics, however, no such acceleration can occur unless at least one of the objects is
being operated on by a force.

Einstein proposed that spacetime is curved by matter, and that free-falling objects are
moving along locally straight paths in curved spacetime. These straight lines are called
geodesics. Like Newton's First Law, Einstein's theory stated that if there is a force applied
to an object, it would deviate from the geodesics in spacetime.[12] For example, we are no
longer following the geodesics while standing because the mechanical resistance of the
Earth exerts an upward force on us. Thus, we are non-inertial on the ground. This
explains why moving along the geodesics in spacetime is considered inertial.

Einstein discovered the field equations of general relativity, which relate the presence of
matter and the curvature of spacetime and are named after him. The Einstein field
equations are a set of 10 simultaneous, non-linear, differential equations. The solutions of
the field equations are the components of the metric tensor of spacetime. A metric tensor
describes a geometry of spacetime. The geodesic paths for a spacetime are calculated
from the metric tensor.

Notable solutions of the Einstein field equations include:

• The Schwarzschild solution, which describes spacetime surrounding a spherically symmetric non-rotating uncharged massive object. For sufficiently compact objects, this solution describes a black hole with a central singularity (see the note on the Schwarzschild radius after this list). For radial distances from the center which are much greater than the Schwarzschild radius, the accelerations predicted by the Schwarzschild solution are practically identical to those predicted by Newton's theory of gravity.
• The Reissner-Nordström solution, in which the central object has an electrical
charge. For charges with a geometrized length which are less than the
geometrized length of the mass of the object, this solution produces black holes
with two event horizons.
• The Kerr solution for rotating massive objects. This solution also produces black
holes with multiple event horizons.
• The Kerr-Newman solution for charged, rotating massive objects. This solution
also produces black holes with multiple event horizons.
• The cosmological Robertson-Walker solution, which predicts the expansion of the
universe.
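
For reference, the Schwarzschild radius referred to in the first solution above is given by (a standard result, added here for orientation, not stated in the original text):

    r_s = \frac{2GM}{c^{2}} \approx 2.95\ \mathrm{km} \times \frac{M}{M_{\odot}}

so a non-rotating body of one solar mass would have to be compressed to within roughly 3 km for this solution to describe a black hole.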

The tests of general relativity included:[13]

• General relativity accounts for the anomalous perihelion precession of Mercury.
• The prediction that time runs slower at lower potentials has been confirmed by the
Pound-Rebka experiment, the Hafele-Keating experiment, and the GPS.
• The prediction of the deflection of light was first confirmed by Arthur Eddington
in 1919.[14][15] The Newtonian corpuscular theory also predicted a lesser deflection
of light, but Eddington found that the results of the expedition confirmed the
predictions of general relativity over those of the Newtonian theory. However this
interpretation of the results was later disputed.[16] More recent tests using radio
interferometric measurements of quasars passing behind the Sun have more
accurately and consistently confirmed the deflection of light to the degree
predicted by general relativity.[17] See also gravitational lensing.
• The time delay of light passing close to a massive object was first identified by
Irwin Shapiro in 1964 in interplanetary spacecraft signals.
• Gravitational radiation has been indirectly confirmed through studies of binary
pulsars.
• Alexander Friedmann in 1922 found that Einstein equations have non-stationary
solutions (even in the presence of the cosmological constant). In 1927 Georges
Lemaître showed that static solutions of the Einstein equations, which are possible
in the presence of the cosmological constant, are unstable, and therefore the static
universe envisioned by Einstein could not exist. Later, in 1931, Einstein himself
agreed with the results of Friedmann and Lemaître. Thus general relativity
predicted that the Universe had to be non-static—it had to either expand or
contract. The expansion of the universe discovered by Edwin Hubble in 1929
confirmed this prediction.[18]

Gravity and quantum mechanics


Main articles: Graviton and Quantum gravity

Several decades after the discovery of general relativity it was realized that general
relativity is incompatible with quantum mechanics.[19] It is possible to describe gravity in
the framework of quantum field theory like the other fundamental forces, such that the
attractive force of gravity arises due to exchange of virtual gravitons, in the same way as
the electromagnetic force arises from exchange of virtual photons.[20][21] This reproduces
general relativity in the classical limit. However, this approach fails at short distances of
the order of the Planck length,[22] where a more complete theory of quantum gravity (or a
new approach to quantum mechanics) is required. Many believe the complete theory to be string theory,[23] or, more recently, M-theory.

Specifics
Earth's gravity
Main article: Earth's gravity

Every planetary body (including the Earth) is surrounded by its own gravitational field,
which exerts an attractive force on all objects. Assuming a spherically symmetrical planet
(a reasonable approximation), the strength of this field at any given point is proportional
to the planetary body's mass and inversely proportional to the square of the distance from
the center of the body.

The strength of the gravitational field is numerically equal to the acceleration of objects
under its influence, and its value at the Earth's surface, denoted g, is approximately
expressed below as the standard average.

g = 9.8 m/s2 = 32.2 ft/s2

This means that, ignoring air resistance, an object falling freely near the Earth's surface increases its velocity by 9.8 m/s (32.2 ft/s or 22 mph) for each second of its descent.
Thus, an object starting from rest will attain a velocity of 9.8 m/s (32.2 ft/s) after one
second, 19.6 m/s (64.4 ft/s) after two seconds, and so on, adding 9.8 m/s (32.2 ft/s) to
each resulting velocity. Also, again ignoring air resistance, any and all objects, when
dropped from the same height, will hit the ground at the same time.
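
The standard average quoted above follows from Newton's law evaluated at the Earth's surface, g = GM/R². A minimal Python sketch with approximate reference values (not taken from the text):

    # Surface gravity from Newton's law of gravitation: g = G*M / R^2.
    # G, M and R are approximate reference values.
    G = 6.674e-11       # m^3 kg^-1 s^-2
    M_earth = 5.972e24  # kg
    R_earth = 6.371e6   # m

    print(G * M_earth / R_earth**2)  # ~9.82 m/s^2, matching the value quoted above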

According to Newton's third law, the Earth itself experiences a force equal in magnitude and opposite in direction to that acting on the falling object, meaning that the Earth also accelerates towards the object. However, because the mass of the Earth is huge, the acceleration of the Earth by this same force is negligible, when measured relative to the system's center of mass.

Equations for a falling body near the surface of the Earth

Ball falling freely under gravity. See text for description.


Main article: Equations for a falling body

Under an assumption of constant gravity, Newton’s law of gravitation simplifies to F = mg, where m is the mass of the body and g is a constant vector with an average magnitude of 9.81 m/s². The acceleration due to gravity is equal to this g. An initially-stationary object which is allowed to fall freely under gravity drops a distance which is proportional to the square of the elapsed time. The image on the right, spanning half a second, was captured with a stroboscopic flash at 20 flashes per second. During the first 1/20th of a second the ball drops one unit of distance (here, a unit is about 12 mm); by 2/20ths it has dropped a total of 4 units; by 3/20ths, 9 units and so on.

Under the same constant gravity assumptions, the potential energy, Ep, of a body at height h is given by Ep = mgh (or Ep = Wh, with W meaning weight). This expression is valid only over small distances h from the surface of the Earth. Similarly, the expression h = v²/(2g) for the maximum height reached by a vertically projected body with velocity v is useful for small heights and small initial velocities only. In the case of large initial velocities we have to use the principle of conservation of energy to find the maximum height reached. The same expression can be solved for v to determine the velocity of an object dropped from a height h immediately before hitting the ground, v = √(2gh), assuming negligible air resistance.
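
The constant-gravity relations just described can be collected in a short sketch (illustrative Python, not part of the original text; air resistance is neglected throughout):

    import math

    g = 9.81  # m/s^2, constant-gravity assumption

    def distance_fallen(t):      # d = g*t^2 / 2
        return 0.5 * g * t**2

    def potential_energy(m, h):  # Ep = m*g*h, small heights only
        return m * g * h

    def max_height(v0):          # h = v0^2 / (2*g), small initial speeds only
        return v0**2 / (2 * g)

    def impact_speed(h):         # v = sqrt(2*g*h)
        return math.sqrt(2 * g * h)

    # Distances at 1/20 s intervals are in the ratio 1 : 4 : 9, as in the photograph:
    print([round(distance_fallen(n / 20) / distance_fallen(1 / 20)) for n in (1, 2, 3)])
    print(impact_speed(10))  # dropping from 10 m gives ~14 m/s at the ground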

Gravity and astronomy


Main article: Gravitation (astronomy)

The discovery and application of Newton's law of gravity accounts for the detailed
information we have about the planets in our solar system, the mass of the Sun, the
distance to stars, quasars and even the theory of dark matter. Although we have not
traveled to all the planets nor to the Sun, we know their masses. These masses are
obtained by applying the laws of gravity to the measured characteristics of the orbit. In
space an object maintains its orbit because of the force of gravity acting upon it. Planets
orbit stars, stars orbit galactic centers, galaxies orbit a center of mass in clusters, and
clusters orbit in superclusters. The force of gravity is proportional to the mass of an
object and inversely proportional to the square of the distance between the objects.

Gravitational radiation
Main article: Gravitational wave

In general relativity, gravitational radiation is generated in situations where the curvature of spacetime is oscillating, such as is the case with co-orbiting objects. The gravitational
radiation emitted by the solar system is far too small to measure. However, gravitational
radiation has been indirectly observed as an energy loss over time in binary pulsar
systems such as PSR 1913+16. It is believed that neutron star mergers and black hole
formation may create detectable amounts of gravitational radiation. Gravitational
radiation observatories such as LIGO have been created to study the problem. No
confirmed detections have been made of this hypothetical radiation, but as the science
behind LIGO is refined and as the instruments themselves are endowed with greater
sensitivity over the next decade, this may change.
Anomalies and discrepancies

There are some observations that are not adequately accounted for, which may point to
the need for better theories of gravity or perhaps be explained in other ways.

• Extra fast stars: Stars in galaxies follow a distribution of velocities where stars
on the outskirts are moving faster than they should according to the observed
distributions of normal matter. Galaxies within galaxy clusters show a similar
pattern. Dark matter, which would interact gravitationally but not
electromagnetically, would account for the discrepancy. Various modifications to
Newtonian dynamics have also been proposed.

• Pioneer anomaly: The two Pioneer spacecraft seem to be slowing down in a way
which has yet to be explained.[24]

• Flyby anomaly: Various spacecraft have experienced greater accelerations during slingshot maneuvers than expected.

• Accelerating expansion: The expansion of the universe seems to be speeding up. Dark energy has been proposed to explain this. A recent alternative explanation is that the geometry of space is not homogeneous (due to clusters of galaxies) and that when the data is reinterpreted to take this into account, the expansion is not speeding up after all,[25] however this conclusion is disputed.[26]

• Anomalous increase of the AU: Recent measurements indicate that planetary orbits are expanding faster than can be explained solely by the Sun losing mass through radiating energy.

• Extra energetic photons: Photons travelling through galaxy clusters should gain
energy and then lose it again on the way out. The accelerating expansion of the
universe should stop the photons returning all the energy, but even taking this into
account photons from the cosmic background radiation gain twice as much energy
as expected. This may indicate that gravity falls off faster than inverse-squared at
certain distance scales[27].

• Dark flow: Surveys of galaxy motions have detected a mystery dark flow towards an unseen mass. Such a mass would be too large to have accumulated since the Big Bang according to current models, and may indicate that gravity falls off more slowly than inverse-squared at certain distance scales[27].

• Extra massive hydrogen clouds: The spectral lines of the Lyman alpha forest
suggest that hydrogen clouds are more clumped together at certain scales than
expected and, like dark flow, may indicate that gravity falls off slower than
inverse-squared at certain distance scales[27].
Alternative theories
Main article: Alternatives to general relativity

Historical alternative theories

• Aristotelian theory of gravity
• Le Sage's theory of gravitation (1784), also called LeSage gravity, proposed by Georges-Louis Le Sage and based on a fluid explanation in which a light gas fills the entire universe.
• Nordström's theory of gravitation (1912, 1913), an early competitor of general
relativity.
• Whitehead's theory of gravitation (1922), another early competitor of general
relativity.

Recent alternative theories

• Brans-Dicke theory of gravity (1961)


• Induced gravity (1967), a proposal by Andrei Sakharov according to which
general relativity might arise from quantum field theories of matter
• In the modified Newtonian dynamics (MOND) (1981), Mordehai Milgrom
proposes a modification of Newton's Second Law of motion for small
accelerations
• The self-creation cosmology theory of gravity (1982) by G.A. Barber in which the
Brans-Dicke theory is modified to allow mass creation
• Nonsymmetric gravitational theory (NGT) (1994) by John Moffat
• Tensor-vector-scalar gravity (TeVeS) (2004), a relativistic modification of MOND
by Jacob Bekenstein
Acid rain
Acid rain is rain or any other form of precipitation that is unusually acidic. It has harmful
effects on plants, aquatic animals, and infrastructure. Acid rain is mostly caused by
human emissions of sulfur and nitrogen compounds which react in the atmosphere to
produce acids. In recent years, many governments have introduced laws to reduce these
emissions.

Definition

"Acid rain" is a popular term referring to the deposition of wet (rain, snow, sleet, fog and
cloudwater, dew) and dry (acidifying particles and gases) acidic components. A more
accurate term is “acid deposition”. Distilled water, which contains no carbon dioxide, has
a neutral pH of 7. Liquids with a pH less than 7 are acidic, and those with a pH greater
than 7 are bases. “Clean” or unpolluted rain has a slightly acidic pH of about 5.2, because
carbon dioxide and water in the air react together to form carbonic acid, a weak acid (pH
5.6 in distilled water), but unpolluted rain also contains other chemicals.[1]

H2O (l) + CO2 (g) → H2CO3 (aq)

Carbonic acid then can ionize in water forming low concentrations of hydronium and
carbonate ions:
2 H2O (l) + H2CO3 (aq) ⇌ CO32− (aq) + 2 H3O+ (aq)
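
The pH of about 5.6 for water in equilibrium with atmospheric carbon dioxide can be reproduced with a back-of-the-envelope calculation. The sketch below is illustrative only; the Henry's law constant, ionization constant and CO2 level are typical textbook values, not figures from this article.

    import math

    # Rough pH of unpolluted rain from CO2 equilibrium (assumed textbook constants).
    K_H  = 3.4e-2   # mol L^-1 atm^-1, Henry's law constant for CO2 in water (~25 C)
    K_a1 = 4.45e-7  # first acid dissociation constant of carbonic acid
    p_co2 = 390e-6  # atm, assumed atmospheric CO2 partial pressure

    co2_aq = K_H * p_co2               # dissolved CO2 (as H2CO3*), mol/L
    h_plus = math.sqrt(K_a1 * co2_aq)  # [H+] from H2CO3* <=> H+ + HCO3-
    print(-math.log10(h_plus))         # ~5.6, consistent with the figure quoted above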

History

Since the Industrial Revolution, emissions of sulfur dioxide and nitrogen oxides to the
atmosphere have increased.[2] [3] In 1852, Robert Angus Smith was the first to show the
relationship between acid rain and atmospheric pollution in Manchester, England.[4]
Though acidic rain was discovered in 1852, it was not until the late 1960s that scientists
began widely observing and studying the phenomenon. The term "acid rain" was
generated in 1972.[5] Canadian Harold Harvey was among the first to research a "dead"
lake. Public awareness of acid rain in the U.S increased in the 1970s after the New York
Times promulgated reports from the Hubbard Brook Experimental Forest in New
Hampshire of the myriad deleterious environmental effects demonstrated to result from
it.[6][7]

Occasional pH readings in rain and fog water of well below 2.4 (the acidity of vinegar)
have been reported in industrialized areas.[2] Industrial acid rain is a substantial problem
in Europe, China,[8][9] Russia and areas down-wind from them. These areas all burn sulfur-
containing coal to generate heat and electricity.[10] The problem of acid rain has not only
increased with population and industrial growth but has also become more widespread. The
use of tall smokestacks to reduce local pollution has contributed to the spread of acid rain
by releasing gases into regional atmospheric circulation.[11][12] Often deposition occurs a
considerable distance downwind of the emissions, with mountainous regions tending to
receive the greatest deposition (simply because of their higher rainfall). An example of
this effect is the low pH of rain (compared to the local emissions) which falls in
Scandinavia.[13]

Emissions of chemicals leading to acidification

The most important gas leading to acidification is sulfur dioxide. Emissions of
nitrogen oxides, which are oxidized to form nitric acid, are of increasing importance due to
stricter controls on emissions of sulfur-containing compounds. About 70 Tg(S) per year in the
form of SO2 comes from fossil fuel combustion and industry, 2.8 Tg(S) per year from wildfires,
and 7-8 Tg(S) per year from volcanoes.[14]
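
To put these figures in perspective, a minimal worked calculation of the anthropogenic share of this sulfur budget is sketched below (the volcanic term is taken as the midpoint of the quoted 7-8 Tg(S) range; that choice is an assumption made here for illustration only).

# Rough sulfur budget using the annual emission figures quoted above, in Tg(S)/yr
fossil_and_industry = 70.0
wildfires = 2.8
volcanoes = 7.5   # assumed midpoint of the 7-8 Tg(S)/yr range

total = fossil_and_industry + wildfires + volcanoes
print(f"total: {total:.1f} Tg(S)/yr")                              # ~80.3 Tg(S)/yr
print(f"anthropogenic share: {fossil_and_industry / total:.0%}")   # ~87%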

Natural phenomena

The principal natural phenomena that contribute acid-producing gases to the atmosphere
are emissions from volcanoes and those from biological processes that occur on the land,
in wetlands, and in the oceans. The major biological source of sulfur containing
compounds is dimethyl sulfide.

Acidic deposits have been detected in glacial ice thousands of years old in remote parts of
the globe.[15]
Human activity

The coal-fired Gavin Power Plant in Cheshire, Ohio

The principal cause of acid rain is sulfur and nitrogen compounds from human sources,
such as electricity generation, factories, and motor vehicles. Coal power plants are one of
the most polluting. The gases can be carried hundreds of kilometres in the atmosphere
before they are converted to acids and deposited. In the past, factories had short chimneys
to release smoke, but this caused many problems locally; factories therefore now have taller
smokestacks. However, dispersal from these taller stacks carries pollutants
farther, causing widespread ecological damage.

Chemical processes
Gas phase chemistry

In the gas phase sulfur dioxide is oxidized by reaction with the hydroxyl radical via an
intermolecular reaction:

SO2 + OH· → HOSO2·

which is followed by:

HOSO2· + O2 → HO2· + SO3

In the presence of water, sulfur trioxide (SO3) is converted rapidly to sulfuric acid:

SO3 (g) + H2O (l) → H2SO4 (l)

Nitric acid is formed by the reaction of OH with nitrogen dioxide:

NO2 + OH· → HNO3

For more information see Seinfeld and Pandis (1998).[4]
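
To give a feel for how slowly this gas-phase pathway proceeds, a back-of-the-envelope lifetime estimate is sketched below. The rate constant and daytime OH concentration are typical literature-style values assumed here for illustration, not figures from this article.

# Rough lifetime of SO2 against gas-phase oxidation by OH (assumed illustrative values)
k_SO2_OH = 1.0e-12   # assumed rate constant, cm^3 molecule^-1 s^-1
OH_conc  = 1.0e6     # assumed typical daytime OH concentration, molecules cm^-3

lifetime_s = 1.0 / (k_SO2_OH * OH_conc)                 # pseudo-first-order lifetime, s
print(f"SO2 lifetime ~ {lifetime_s / 86400:.0f} days")  # on the order of ten days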

Chemistry in cloud droplets

When clouds are present, the loss rate of SO2 is faster than can be explained by gas phase
chemistry alone. This is due to reactions in the liquid water droplets.
Hydrolysis

Sulfur dioxide dissolves in water and then, like carbon dioxide, hydrolyses in a series of
equilibrium reactions:

SO2 (g) + H2O ⇌ SO2·H2O

SO2·H2O ⇌ H+ + HSO3−
HSO3− ⇌ H+ + SO32−
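
A minimal sketch of how dissolved S(IV) partitions among these three forms as a function of droplet pH is given below; the two dissociation constants used are assumed illustrative values, not figures from this article.

# Fractions of dissolved S(IV) present as SO2.H2O, HSO3- and SO3^2- at a given pH
Ka1, Ka2 = 1.3e-2, 6.6e-8   # assumed dissociation constants for the two equilibria above

def s4_fractions(pH):
    h = 10.0 ** (-pH)
    denom = 1.0 + Ka1 / h + Ka1 * Ka2 / h**2
    return (1.0 / denom,                 # SO2.H2O
            (Ka1 / h) / denom,           # HSO3-
            (Ka1 * Ka2 / h**2) / denom)  # SO3^2-

for pH in (2, 4, 6):
    print(pH, [f"{f:.3f}" for f in s4_fractions(pH)])
# At typical cloud-water pH values (roughly 4-6), the bisulfite ion HSO3- dominates.
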
Oxidation

There are a large number of aqueous reactions that oxidize sulfur from S(IV) to S(VI),
leading to the formation of sulfuric acid. The most important oxidation reactions are with
ozone, hydrogen peroxide and oxygen (reactions with oxygen are catalyzed by iron and
manganese in the cloud droplets).

For more information see Seinfeld and Pandis (1998).[4]

Acid deposition

Processes involved in acid deposition (note that only SO2 and NOx play a significant role
in acid rain).

Wet deposition

Wet deposition of acids occurs when any form of precipitation (rain, snow, etc.) removes
acids from the atmosphere and delivers them to the Earth's surface. This can result from the
deposition of acids produced in the raindrops (see aqueous-phase chemistry above) or from
the precipitation removing the acids either in clouds or below clouds. Wet removal of
both gases and aerosols is important for wet deposition.

Dry deposition

Acid deposition also occurs via dry deposition in the absence of precipitation. This can be
responsible for as much as 20 to 60% of total acid deposition.[16] This occurs when
particles and gases stick to the ground, plants or other surfaces.
Adverse effects

This chart shows that not all fish, shellfish, or the insects that they eat can tolerate the
same amount of acid; for example, frogs can tolerate water that is more acidic (i.e., has a
lower pH) than trout.

Acid rain has been shown to have adverse impacts on forests, freshwaters and soils,
killing insect and aquatic life-forms as well as causing damage to buildings and having
impacts on human health.

Surface waters and aquatic animals

Both the lower pH and higher aluminum concentrations in surface water that occur as a
result of acid rain can cause damage to fish and other aquatic animals. At pH values lower than
5, most fish eggs will not hatch, and lower pHs can kill adult fish. As lakes and rivers
become more acidic, biodiversity is reduced. Acid rain has eliminated insect life and some
fish species, including the brook trout in some lakes, streams, and creeks in
geographically sensitive areas, such as the Adirondack Mountains of the United States.[17]
However, the extent to which acid rain contributes directly or indirectly via runoff from
the catchment to lake and river acidity (i.e., depending on characteristics of the
surrounding watershed) is variable. The United States Environmental Protection Agency's
(EPA) website states: "Of the lakes and streams surveyed, acid rain caused acidity in 75
percent of the acidic lakes and about 50 percent of the acidic streams".[17]

Soils

Soil biology and chemistry can be seriously damaged by acid rain. Some microbes are
unable to tolerate changes to low pHs and are killed.[18] The enzymes of these microbes
are denatured (changed in shape so they no longer function) by the acid. The hydronium
ions of acid rain also mobilize toxins such as aluminium, and leach away essential
nutrients and minerals such as magnesium.[19]

2 H+ (aq) + Mg2+ (clay) ⇌ 2 H+ (clay) + Mg2+ (aq)

Soil chemistry can be dramatically changed when base cations, such as calcium and
magnesium, are leached by acid rain thereby affecting sensitive species, such as sugar
maple (Acer saccharum).[20][21]
Forests and other vegetation

Effect of acid rain on a forest, Jizera Mountains, Czech Republic

Adverse effects may be indirectly related to acid rain, such as the acid's effects on soil (see
above) or high concentrations of gaseous precursors to acid rain. High-altitude forests are
especially vulnerable as they are often surrounded by clouds and fog which are more
acidic than rain.

Other plants can also be damaged by acid rain but the effect on food crops is minimized
by the application of lime and fertilizers to replace lost nutrients. In cultivated areas,
limestone may also be added to increase the ability of the soil to keep the pH stable, but
this tactic is largely unusable in the case of wilderness lands. When calcium is leached
from the needles of red spruce, these trees become less cold tolerant and exhibit winter
injury and even death.[22][23]

Human health

Scientists have suggested direct links to human health.[24] Fine particles, a large fraction
of which are formed from the same gases as acid rain (sulfur dioxide and nitrogen
dioxide), have been shown to cause illness and premature death from cancer and other
diseases.[25] For more information on the health effects of aerosols see particulate health
effects.

Other adverse effects

Effect of acid rain on statues

Acid rain can also cause damage to certain building materials and historical monuments.
This results when the sulfuric acid in the rain chemically reacts with the calcium
compounds in the stones (limestone, sandstone, marble and granite) to create gypsum,
which then flakes off.
CaCO3 (s) + H2SO4 (aq) → CaSO4 (aq) + CO2 (g) + H2O (l)

This result is also commonly seen on old gravestones where the acid rain can cause the
inscription to become completely illegible. Acid rain also causes an increased rate of
oxidation for iron.[26] Visibility is also reduced by sulfate and nitrate aerosols and particles
in the atmosphere.[27]

Affected areas

Particularly badly affected places around the globe include most of Europe (particularly
Scandinavia, where many lakes are so acidic that they contain no life and many trees are dead),
many parts of the United States (states such as New York are very badly affected), and
southwestern Canada. Other affected areas include the southeastern coast of China and
Taiwan.

Potential problem areas in the future

Places such as much of Southeast Asia (Indonesia, Malaysia and Thailand), the western part
of South Africa (the country), southern India and Sri Lanka, and even West Africa (countries
such as Ghana, Togo and Nigeria) could all be prone to acidic rainfall in the future.

Prevention methods
Technical solutions

In the United States, many coal-burning power plants use flue gas desulfurization (FGD)
to remove sulfur-containing gases from their stack gases. An example of FGD is the wet
scrubber which is commonly used in the U.S. and many other countries. A wet scrubber is
basically a reaction tower equipped with a fan that extracts hot smoke stack gases from a
power plant into the tower. Lime or limestone in slurry form is also injected into the
tower to mix with the stack gases and combine with the sulfur dioxide present. The
calcium carbonate of the limestone produces pH-neutral calcium sulfate that is physically
removed from the scrubber. That is, the scrubber turns sulfur pollution into industrial
sulfates.
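
As a rough illustration of the stoichiometry involved, a minimal calculation of the limestone demand and gypsum yield per tonne of SO2 captured is sketched below. The overall reaction CaCO3 + SO2 + 1/2 O2 + 2 H2O → CaSO4·2H2O + CO2 is an idealized summary assumed here; real scrubbers run with excess limestone and incomplete conversion.

# Idealized wet-scrubber stoichiometry (rounded molar masses in g/mol)
M_SO2, M_CaCO3, M_GYPSUM = 64.07, 100.09, 172.17

so2_captured_t = 1.0                               # tonnes of SO2 removed
limestone_t = so2_captured_t * M_CaCO3 / M_SO2     # ~1.56 t of limestone consumed
gypsum_t    = so2_captured_t * M_GYPSUM / M_SO2    # ~2.69 t of gypsum produced

print(f"limestone needed: {limestone_t:.2f} t, gypsum produced: {gypsum_t:.2f} t")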

In some areas the sulfates are sold to chemical companies as gypsum when the purity of
calcium sulfate is high. In others, they are placed in landfill. However, the effects of acid
rain can last for generations, as the effects of pH level change can stimulate the continued
leaching of undesirable chemicals into otherwise pristine water sources, killing off
vulnerable insect and fish species and blocking efforts to restore native life.

Automobile emissions control reduces emissions of nitrogen oxides from motor vehicles.

International treaties
Related terms:
Acid Rain Program

A number of international treaties on the long-range transport of atmospheric pollutants
have been agreed, for example the Sulphur Emissions Reduction Protocol under the Convention on
Long-Range Transboundary Air Pollution.

Emissions trading
Main article: Emissions trading

In this regulatory scheme, every current polluting facility is given or may purchase on an
open market an emissions allowance for each unit of a designated pollutant it emits.
Operators can then install pollution control equipment, and sell portions of their
emissions allowances they no longer need for their own operations, thereby recovering
some of the capital cost of their investment in such equipment. The intention is to give
operators economic incentives to install pollution controls.
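
A purely illustrative sketch of the bookkeeping behind such a scheme is given below; the class, method names and numbers are invented for this example and do not correspond to the rules of any actual trading program.

# Minimal hypothetical allowance ledger for a cap-and-trade scheme
class Facility:
    def __init__(self, name, allowances):
        self.name = name
        self.allowances = allowances   # permitted units of the designated pollutant
        self.emitted = 0

    def emit(self, units):
        self.emitted += units

    def surplus(self):
        return self.allowances - self.emitted

    def sell_to(self, buyer, units):
        # A facility that installs controls can sell allowances it no longer needs
        if units > self.surplus():
            raise ValueError("cannot sell more than the unused surplus")
        self.allowances -= units
        buyer.allowances += units

plant_a = Facility("Plant A (installed scrubbers)", allowances=100)
plant_b = Facility("Plant B", allowances=100)
plant_a.emit(60)               # controls keep emissions well below the allocation
plant_b.emit(120)              # emits above its allocation and must buy allowances
plant_a.sell_to(plant_b, 30)   # the trade recovers part of Plant A's control costs
print(plant_a.surplus(), plant_b.surplus())   # 10 10: both facilities end within their caps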

The first emissions trading market was established in the United States by enactment of
the Clean Air Act Amendments of 1990. The overall goal of the Acid Rain Program
established by the Act[28] is to achieve significant environmental and public health
benefits through reductions in emissions of sulfur dioxide (SO2) and nitrogen oxides
(NOx), the primary causes of acid rain. To achieve this goal at the lowest cost to society,
the program employs both regulatory and market-based approaches for controlling air
pollution.
