
Pollution

I INTRODUCTION

Pollution, contamination of Earth’s environment with materials that interfere with human
health, the quality of life, or the natural functioning of ecosystems (living organisms and
their physical surroundings). Although some environmental pollution is a result of natural
causes such as volcanic eruptions, most is caused by human activities.

There are two main categories of polluting materials, or pollutants. Biodegradable
pollutants are materials, such as sewage, that rapidly decompose by natural processes.
These pollutants become a problem when added to the environment faster than they can
decompose (see Sewage Disposal). Nondegradable pollutants are materials that either do
not decompose or decompose slowly in the natural environment. Once contamination
occurs, it is difficult or impossible to remove these pollutants from the environment.

Nondegradable compounds such as dichlorodiphenyltrichloroethane (DDT), dioxins,
polychlorinated biphenyls (PCBs), and radioactive materials can reach dangerous levels of
accumulation as they are passed up the food chain into the bodies of progressively larger
animals. For example, molecules of toxic compounds may collect on the surface of aquatic
plants without doing much damage to the plants. A small fish that grazes on these plants
accumulates a high concentration of the toxin. Larger fish or other carnivores that eat the
small fish will accumulate even greater, and possibly life-threatening, concentrations of the
compound. This process is known as bioaccumulation.
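The multiplying effect described above can be sketched in a few lines of code. The starting concentration and the tenfold step per trophic level are illustrative round numbers chosen for clarity, not measured values.

```python
# Illustrative sketch of bioaccumulation up a food chain.
# The concentration factor and the number of trophic levels are
# hypothetical round numbers, not field measurements.

def bioaccumulate(start_conc, factor, levels):
    """Concentration at each step up the food chain, assuming each
    predator concentrates the toxin by `factor` over its prey."""
    history = [start_conc]
    concentration = start_conc
    for _ in range(levels):
        concentration *= factor
        history.append(concentration)
    return history

# 1 unit on aquatic plants, tenfold concentration per trophic level:
# plants -> small fish -> larger fish -> carnivore -> top predator
chain = bioaccumulate(1, 10, 4)
print(chain)  # [1, 10, 100, 1000, 10000]
```

Even with a modest concentration factor at each step, a top predator a few levels up ends with a concentration orders of magnitude above the original contamination.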

II IMPACTS OF POLLUTION

Because humans are at the top of the food chain, they are particularly vulnerable to the
effects of nondegradable pollutants. This was clearly illustrated in the 1950s and 1960s
when residents living near Minamata Bay, Japan, developed nervous disorders, tremors,
and paralysis in a mysterious epidemic. More than 400 people died before authorities
discovered that a local industry had released mercury into Minamata Bay. This highly toxic
element accumulated in the bodies of local fish and eventually in the bodies of people who
consumed the fish. More recently, research has revealed that many chemical pollutants,
such as DDT and PCBs, mimic sex hormones and interfere with the human body’s
reproductive and developmental functions. These substances are known as endocrine
disrupters. See Occupational and Environmental Diseases.

Pollution also has a dramatic effect on natural resources. Ecosystems such as forests,
wetlands, coral reefs, and rivers perform many important services for Earth’s environment.
They enhance water and air quality, provide habitat for plants and animals, and provide
food and medicines. Any or all of these ecosystem functions may be impaired or destroyed
by pollution. Moreover, because of the complex relationships among the many types of
organisms and ecosystems, environmental contamination may have far-reaching
consequences that are not immediately obvious or that are difficult to predict. For instance,
scientists can only speculate on some of the potential impacts of the depletion of the ozone
layer, the protective layer in the atmosphere that shields Earth from the Sun’s harmful
ultraviolet rays.

Another major effect of pollution is the tremendous cost of pollution cleanup and
prevention. The global effort to control emissions of carbon dioxide, a gas produced from
the combustion of fossil fuels such as coal or oil, or of other organic materials like wood, is
one such example. The cost of maintaining annual national carbon dioxide emissions at
1990 levels is estimated to be 2 percent of the gross domestic product for developed
countries. Expenditures to reduce pollution in the United States in 1993 totaled $109
billion: $105.4 billion on reduction, $1.9 billion on regulation, and $1.7 billion on research
and development. Twenty-nine percent of the total cost went toward air pollution, 36
percent to water pollution, and 36 percent to solid waste management.

In addition to its effects on the economy, health, and natural resources, pollution has social
implications. Research has shown that low-income populations and minorities do not
receive the same protection from environmental contamination as do higher-income
communities. Toxic waste incinerators, chemical plants, and solid waste dumps are often
located in low-income communities because of a lack of organized, informed community
involvement in municipal decision-making processes.

III TYPES OF POLLUTION

Pollution exists in many forms and affects many different aspects of Earth’s environment.
Point-source pollution comes from specific, localized, and identifiable sources, such as
sewage pipelines or industrial smokestacks. Nonpoint-source pollution comes from
dispersed or uncontained sources, such as contaminated water runoff from urban areas or
automobile emissions.

The effects of these pollutants may be immediate or delayed. Primary effects of pollution
occur immediately after contamination occurs, such as the death of marine plants and
wildlife after an oil spill at sea. Secondary effects may be delayed or may persist in the
environment into the future, perhaps going unnoticed for many years. DDT, a
nondegradable compound, seldom poisons birds immediately, but gradually accumulates
in their bodies. Birds with high concentrations of this pesticide lay thin-shelled eggs that
fail to hatch or produce deformed offspring. These secondary effects, publicized by Rachel
Carson in her 1962 book, Silent Spring, threatened the survival of species such as the bald
eagle and peregrine falcon, and aroused public concern over the hidden effects of
nondegradable chemical compounds.

A Air Pollution

Human contamination of Earth’s atmosphere can take many forms and has existed since
humans first began to use fire for agriculture, heating, and cooking. During the Industrial
Revolution of the 18th and 19th centuries, however, air pollution became a major problem.
As early as 1661 British author and founding member of the British Royal Society John
Evelyn reported of London in his treatise Fumifugium, “… the weary Traveller, at many
Miles distance, sooner smells, than sees the City to which he repairs. This is that pernicious
Smoake which fullyes all her Glory, superinducing a sooty Crust or Furr upon all that it
lights.…”

Urban air pollution is commonly known as smog. The dark London smog that Evelyn wrote
of is generally a smoky mixture of carbon monoxide and organic compounds from
incomplete combustion (burning) of fossil fuels such as coal, and sulfur dioxide from
impurities in the fuels. As the smog ages and reacts with oxygen, organic and sulfuric acids
condense as droplets, increasing the haze. Smog developed into a major health hazard by
the 20th century. In 1948, 19 people died and thousands were sickened by smog in the
small U.S. steel-mill town of Donora, Pennsylvania. In 1952, about 4,000 Londoners died of
its effects.

A second type of smog, photochemical smog, began reducing air quality over large cities
like Los Angeles in the 1930s. This smog is caused by combustion in car, truck, and
airplane engines, which produce nitrogen oxides and release hydrocarbons from unburned
fuels. Sunlight causes the nitrogen oxides and hydrocarbons to combine and turn oxygen
into ozone, a chemical agent that attacks rubber, injures plants, and irritates lungs. The
hydrocarbons are oxidized into materials that condense and form a visible, pungent haze.

Eventually most pollutants are washed out of the air by rain, snow, fog, or mist, but only
after traveling large distances, sometimes across continents. As pollutants build up in the
atmosphere, sulfur and nitrogen oxides are converted into acids that mix with rain. This
acid rain falls in lakes and on forests, where it can lead to the death of fish and plants, and
damage entire ecosystems. Eventually the contaminated lakes and forests may become
lifeless. Regions that are downwind of heavily industrialized areas, such as Europe and the
eastern United States and Canada, are the hardest hit by acid rain. Acid rain can also affect
human health and man-made objects; it is slowly dissolving historic stone statues and
building facades in London, Athens, and Rome.

One of the greatest challenges caused by air pollution is global warming, an increase in
Earth’s temperature due to the buildup of certain atmospheric gases such as carbon
dioxide. With the heavy use of fossil fuels in the 20th century, atmospheric concentrations
of carbon dioxide have risen dramatically. Carbon dioxide and other gases, known as
greenhouse gases, reduce the escape of heat from the planet without blocking radiation
coming from the Sun. Because of this greenhouse effect, average global temperatures are
expected to rise 1.4 to 5.8 Celsius degrees (2.5 to 10.4 Fahrenheit degrees) by the year
2100. Although this trend appears to be a small change, the increase would make the
Earth warmer than it has been in the last 125,000 years, possibly changing climate
patterns, affecting crop production, disrupting wildlife distributions, and raising the sea
level.

Air pollution can also damage the upper atmospheric region known as the stratosphere.
Excessive production of chlorine-containing compounds such as chlorofluorocarbons (CFCs)
(compounds formerly used in refrigerators, air conditioners, and in the manufacture of
polystyrene products) has depleted the stratospheric ozone layer, creating a hole above
Antarctica that lasts for several weeks each year. As a result, exposure to the Sun’s
harmful rays has damaged aquatic and terrestrial wildlife and threatens human health in
high-latitude regions of the northern and southern hemispheres.

B Water Pollution

The demand for fresh water rises continuously as the world’s population grows. From 1940
to 1990 withdrawals of fresh water from rivers, lakes, reservoirs, and other sources
increased fourfold. Of the water consumed in the United States in 1995, 39 percent was
used for irrigation, 39 percent was used for electric power generation, and 12 percent was
used for other utilities; industry and mining used 7 percent, and the rest was used for
agricultural livestock and commercial purposes.

Sewage, industrial wastes, and agricultural chemicals such as fertilizers and pesticides are
the main causes of water pollution. The U.S. Environmental Protection Agency (EPA)
reports that about 37 percent of the country’s lakes and estuaries, and 36 percent of its
rivers, are too polluted for basic uses such as fishing or swimming during all or part of the
year. In developing nations, more than 95 percent of urban sewage is discharged untreated
into rivers and bays, creating a major human health hazard.

Water runoff, a nonpoint source of pollution, carries fertilizing chemicals such as
phosphates and nitrates from agricultural fields and yards into lakes, streams, and rivers.
These combine with the phosphates and nitrates from sewage to speed the growth of
algae, a type of plantlike organism. The water body may then become choked with
decaying algae, which severely depletes the oxygen supply. This process, called
eutrophication, can cause the death of fish and other aquatic life. Agricultural runoff may
be to blame for the growth of a toxic form of algae called Pfiesteria piscicida, which was
responsible for killing large amounts of fish in bodies of water from the Delaware Bay to the
Gulf of Mexico in the late 1990s. Runoff also carries toxic pesticides and urban and
industrial wastes into lakes and streams.

Erosion, the wearing away of topsoil by wind and rain, also contributes to water pollution.
Soil and silt (a fine sediment) washed from logged hillsides, plowed fields, or construction
sites, can clog waterways and kill aquatic vegetation. Even small amounts of silt can
eliminate desirable fish species. For example, when logging removes the protective plant
cover from hillsides, rain may wash soil and silt into streams, covering the gravel beds that
trout or salmon use for spawning.

The marine fisheries supported by ocean ecosystems are an essential source of protein,
particularly for people in developing countries. Yet pollution in coastal bays, estuaries, and
wetlands threatens fish stocks already depleted by overfishing. In 1989, 260,000 barrels of
oil spilled from the oil tanker Exxon Valdez into Alaska’s Prince William Sound, a pristine
and rich fishing ground. In 1999 there were 8,539 reported spills in and around U.S. waters,
involving 4.4 billion liters (1.2 billion gallons) of oil.

C Soil Pollution

Soil is a mixture of mineral, plant, and animal materials that forms during a long process
that may take thousands of years. It is necessary for most plant growth and is essential for
all agricultural production. Soil pollution is a buildup of toxic chemical compounds, salts,
pathogens (disease-causing organisms), or radioactive materials that can affect plant and
animal life.

Unhealthy soil management methods have seriously degraded soil quality, caused soil
pollution, and enhanced erosion. Treating the soil with chemical fertilizers, pesticides, and
fungicides interferes with the natural processes occurring within the soil and destroys
useful organisms such as bacteria, fungi, and other microorganisms. For instance,
strawberry farmers in California fumigate the soil with methyl bromide to destroy
organisms that may harm young strawberry plants. This process indiscriminately kills even
beneficial microorganisms and leaves the soil sterile and dependent upon fertilizer to
support plant growth. This results in heavy fertilizer use and increases polluted runoff into
lakes and streams.

Improper irrigation practices in areas with poorly drained soil may result in salt deposits
that inhibit plant growth and may lead to crop failure. In 2000 BC, the ancient Sumerian
cities of the southern Tigris-Euphrates Valley in Mesopotamia depended on thriving
agriculture. By 1500 BC, these cities had collapsed largely because of crop failure due to
high soil salinity. The same soil pollution problem exists today in the Indus Valley in
Pakistan, the Nile Valley in Egypt, and the Imperial Valley in California.

D Solid Waste

Solid wastes are unwanted solid materials such as garbage, paper, plastics and other
synthetic materials, metals, and wood. Billions of tons of solid waste are thrown out
annually. The United States alone produces about 200 million metric tons of municipal solid
waste each year (see Solid Waste Disposal). A typical American generates an average of 2
kg (4 lb) of solid waste each day. Cities in economically developed countries produce far
more solid waste per capita than those in developing countries. Moreover, waste from
developed countries typically contains a high percentage of synthetic materials that take
longer to decompose than the primarily biodegradable waste materials of developing
countries.

Areas where wastes are buried, called landfills, are the cheapest and most common
disposal method for solid wastes worldwide. But landfills quickly become overfilled and
may contaminate air, soil, and water. Incineration, or burning, of waste reduces the volume
of solid waste but produces dense ashen wastes (some of which become airborne) that
often contain dangerous concentrations of hazardous materials such as heavy metals and
toxic compounds. Composting, using natural biological processes to speed the
decomposition of organic wastes, is an effective strategy for dealing with organic garbage
and produces a material that can be used as a natural fertilizer. Recycling, extracting and
reusing certain waste materials, has become an important part of municipal solid waste
strategies in developed countries. According to the EPA, more than one-fourth of the
municipal solid waste produced in the United States is now recycled or composted.
Recycling also plays a significant, informal role in solid waste management for many Asian
countries, such as India, where organized waste-pickers comb streets and dumps for items
such as plastics, which they use or resell.

Expanding recycling programs worldwide can help reduce solid waste pollution, but the key
to solving severe solid waste problems lies in reducing the amount of waste generated.
Waste prevention, or source reduction, such as altering the way products are designed or
manufactured to make them easier to reuse, reduces the high costs associated with
environmental pollution.

E Hazardous Waste

Hazardous wastes are solid, liquid, or gas wastes that may be deadly or harmful to people
or the environment and tend to be persistent or nondegradable in nature. Such wastes
include toxic chemicals and flammable or radioactive substances, including industrial
wastes from chemical plants or nuclear reactors, agricultural wastes such as pesticides and
fertilizers, medical wastes, and household hazardous wastes such as toxic paints and
solvents.

About 400 million metric tons of hazardous wastes are generated each year. The United
States alone produces about 250 million metric tons—70 percent from the chemical
industry. The use, storage, transportation, and disposal of these substances pose serious
environmental and health risks. Even brief exposure to some of these materials can cause
cancer, birth defects, nervous system disorders, and death. Large-scale releases of
hazardous materials may cause thousands of deaths and contaminate air, water, and soil
for many years. The world’s worst nuclear reactor accident took place near Chernobyl’,
Ukraine, in 1986 (see Chernobyl’ Accident). The accident killed at least 31 people, forced
the evacuation and relocation of more than 200,000 more, and sent a plume of radioactive
material into the atmosphere that contaminated areas as far away as Norway and the
United Kingdom.

Until the Minamata Bay contamination was discovered in Japan in the 1960s and 1970s,
most hazardous wastes were legally dumped in solid waste landfills, buried, or dumped
into lakes, rivers, and oceans. Legal regulations now restrict how such materials may be
used or disposed, but such laws are difficult to enforce and often contested by industry. It
is not uncommon for industrial firms in developed countries to pay poorer countries to
accept shipments of solid and hazardous wastes, a practice that has become known as the
waste trade. Moreover, cleaning up the careless dumping of the mid-20th century is
costing billions of dollars and progressing very slowly, if at all. The United States has an
estimated 217,000 hazardous waste dumps that need immediate action. Cleaning them up
could take more than 30 years and cost $187 billion.

Hazardous wastes of particular concern are the radioactive wastes from the nuclear power
and weapons industries. To date there is no safe method for permanent disposal of old fuel
elements from nuclear reactors. Most are kept in storage facilities at the original reactor
sites where they were generated. With the end of the Cold War, nuclear warheads that are
decommissioned, or no longer in use, also pose storage and disposal problems.

F Noise Pollution

Unwanted sound, or noise, such as that produced by airplanes, traffic, or industrial
machinery, is considered a form of pollution. Noise pollution is at its worst in densely
populated areas. It can cause hearing loss, stress, high blood pressure, sleep loss,
distraction, and lost productivity.

Sounds are produced by objects that vibrate at a rate that the ear can detect. This rate is
called frequency and is measured in hertz, or vibrations per second. Most humans can hear
sounds between 20 and 20,000 hertz, while dogs can hear high-pitched sounds up to
50,000 hertz. While high-frequency sounds tend to be more hazardous and more annoying
to hearing than low-frequency sounds, most noise pollution damage is related to the
intensity of the sound, or the amount of energy it has. Measured in decibels, noise intensity
can range from zero, the quietest sound the human ear can detect, to over 160 decibels.
Conversation takes place at around 40 decibels, a subway train is about 80 decibels, and a
rock concert is from 80 to 100 decibels. The intensity of a nearby jet taking off is about 110
decibels. The threshold for pain, tissue damage, and potential hearing loss in humans is
120 decibels. Long-lasting, high-intensity sounds are the most damaging to hearing and
produce the most stress in humans.
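The decibel scale described above is logarithmic: every 10-decibel step corresponds to a tenfold increase in sound intensity. The formulas below are the standard acoustics relations; the example levels are the ones quoted in the text.

```python
import math

def intensity_ratio_to_db(ratio):
    """Sound level in decibels for an intensity `ratio` relative
    to the threshold of hearing (defined as 0 dB)."""
    return 10 * math.log10(ratio)

def db_to_intensity_ratio(db):
    """Inverse: how many times the threshold intensity a level is."""
    return 10 ** (db / 10)

print(intensity_ratio_to_db(1))    # 0.0 -> the quietest audible sound

# The 120-dB pain threshold is a trillion (10**12) times the
# threshold intensity, even though the decibel numbers look close.
print(db_to_intensity_ratio(120))

# A 80-dB subway train carries 10,000 times the intensity
# of a 40-dB conversation.
print(db_to_intensity_ratio(80) / db_to_intensity_ratio(40))
```

This is why the text attributes most hearing damage to intensity: the underlying energy grows exponentially while the decibel numbers grow linearly.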

Solutions to noise pollution include adding insulation and sound-proofing to doors, walls,
and ceilings; using ear protection, particularly in industrial working areas; planting
vegetation to absorb and screen out noise pollution; and zoning urban areas to maintain a
separation between residential areas and zones of excessive noise.

IV HISTORY

Much of what we know of ancient civilizations comes from the wastes they left behind.
Refuse such as animal skeletons and implements from stone age cave dwellings in Europe,
China, and the Middle East helps reveal hunting techniques, diet, clothing, tool usage, and
the use of fire for cooking. Prehistoric refuse heaps, or middens, discovered by
archaeologists in coastal areas of North America reveal information about the shellfish diet
and eating habits of Native Americans who lived more than 10,000 years ago.

As humans developed new technologies, the magnitude and severity of pollution
increased. Many historians speculate that the extensive use of lead plumbing for drinking
water in Rome caused chronic lead poisoning in those who could afford such plumbing. The
mining and smelting of ores that accompanied the transition from the Stone Age to the
Metal Age resulted in piles of mining wastes that spread potentially toxic elements such as
mercury, copper, lead, and nickel throughout the environment.

Evidence of pollution during the early Industrial Revolution is widespread. Samples of hair
from historical figures such as Newton and Napoleon show the presence of toxic elements
such as antimony and mercury. By the 1800s, certain trades were associated with
characteristic occupational diseases: Chimney sweeps contracted cancer of the scrotum
(the external sac of skin enclosing the testes, or reproductive glands) from hydrocarbons in
chimney soot; hatters became disoriented, or “mad,” from nerve-destroying mercury salts
used to treat felt fabric; and bootblacks suffered liver damage from boot polish solvents.

During the 20th century, pollution evolved from a mainly localized problem to one of global
consequences in which pollutants not only persisted in the environment, but changed
atmospheric and climatic conditions. The Minamata Bay disaster was the first major
indication that humans would need to pay more attention to their waste products and
waste disposal practices, in particular, hazardous waste disposal. In the years that
followed, many more instances of neglect or carelessness resulted in dangerous levels of
contamination. In 1976 an explosion at a chemical factory in Seveso, Italy, released clouds
of toxic dioxin into the area, exposing hundreds of residents and killing thousands of
animals that ate exposed food. In 1978 it was discovered that the Love Canal housing
development in New York State was built on a former chemical waste dump. The
development was declared uninhabitable. The world’s worst industrial accident occurred in
Bhopal, India, in 1984. A deadly gas leaked from an American chemical plant, killing more
than 3,800 people and injuring more than 200,000.

The 1986 Chernobyl’ nuclear reactor accident demonstrated the dangerous contamination
effects of large, uncontained disasters. In an unprecedented action, pollution was used as a
military tactic in 1991 during the conflict in the Persian Gulf. The Iraqi military intentionally
released as much as 1 billion liters (336 million gallons) of crude oil into the Persian Gulf
and set fire to more than 700 oil wells, sending thick, black smoke into the atmosphere
over the Middle East.

V CONTROLLING POLLUTION

Because of the many environmental tragedies of the mid-20th century, many nations
instituted comprehensive regulations designed to repair the past damage of uncontrolled
pollution and prevent future environmental contamination. In the United States, the Clean
Air Act (1970) and its amendments significantly reduced certain types of air pollution, such
as sulfur dioxide emissions. The Clean Water Act (1977) and Safe Drinking Water Act
(1974) regulated pollution discharges and set water quality standards. The Toxic
Substances Control Act (1976) and the Resource Conservation and Recovery Act (1976)
provided for the testing and control of toxic and hazardous wastes. In 1980 Congress
passed the Comprehensive Environmental Response, Compensation, and Liability Act
(CERCLA), also known as Superfund, to provide funds to clean up the most severely
contaminated hazardous waste sites. These and several other federal and state laws
helped limit uncontrolled pollution, but progress has been slow and many severe
contamination problems remain due to lack of funds for cleanup and enforcement.

International agreements have also played a role in reducing global pollution. The Montréal
Protocol on Substances that Deplete the Ozone Layer (1987) set international target dates
for reducing the manufacture and emissions of the chemicals, such as CFCs, known to
deplete the ozone layer. The Basel Convention on the Control of Transboundary
Movements of Hazardous Wastes and Their Disposal (1989) serves as a framework for the
international regulation of hazardous waste transport and disposal.

Since 1992 representatives from more than 160 nations have met regularly to discuss
methods to reduce greenhouse gas emissions. In 1997 the Kyōto Protocol was devised,
calling for industrialized countries to reduce their gas emissions by 2012 to an average 5
percent below 1990 levels. At the end of 2000 the Kyōto Protocol had not yet been ratified;
negotiators were still working to find consensus on the rules, methods, and penalties that
should be used to enforce the treaty.

Regulations and legislation have led to considerable progress in cleaning up air and water
pollution in developed countries. Vehicles in the 1990s emit fewer nitrogen oxides than
those in the 1970s did; power plants now burn low-sulfur fuels; industrial stacks have
scrubbers to reduce emissions; and lead has been removed from gasoline. Developing
countries, however, continue to struggle with pollution control because they lack clean
technologies and desperately need to improve economic strength, often at the cost of
environmental quality. The problem is compounded by developing countries attracting
foreign investment and industry by offering cheaper labor, cheaper raw materials, and
fewer environmental restrictions. The maquiladoras, assembly plants along the Mexican
side of the Mexico-U.S. border, provide jobs and industry for Mexico but are generally
owned by non-Mexican corporations attracted to the cheap labor and lack of pollution
regulation. As a result, this border region, including the Río Grande, is one of the most
heavily polluted zones in North America. To avoid ecological disaster and increased
poverty, developing countries will require aid and technology from outside nations and
corporations, community participation in development initiatives, and strong
environmental regulations.

Nongovernmental citizen groups have formed at the local, national, and international level
to combat pollution problems worldwide. Many of these organizations provide information
and support for people or organizations traditionally not involved in the decision-making
process. The Pesticide Action Network provides technical information about the effects of
pesticides on farmworkers. The Citizen’s Clearinghouse for Hazardous Waste, established
by veterans of the Love Canal controversy, provides support for communities targeted for
hazardous waste installations. A well-organized, grassroots, environmental justice
movement has arisen to advocate equitable environmental protections. Greenpeace is an
activist organization that focuses international attention on industries and governments
known to contaminate land, sea, or atmosphere with toxic or solid wastes. Friends of the
Earth International is a federation of international organizations that fight environmental
pollution around the world.

Computer

I INTRODUCTION

Computer, machine that performs tasks, such as calculations or electronic communication,
under the control of a set of instructions called a program. Programs usually reside within
the computer and are retrieved and processed by the computer’s electronics. The program
results are stored or routed to output devices, such as video display monitors or printers.
Computers perform a wide variety of activities reliably, accurately, and quickly.

II USES OF COMPUTERS

People use computers in many ways. In business, computers track inventories with bar
codes and scanners, check the credit status of customers, and transfer funds electronically.
In homes, tiny computers embedded in the electronic circuitry of most appliances control
the indoor temperature, operate home security systems, tell the time, and turn
videocassette recorders (VCRs) on and off. Computers in automobiles regulate the flow of
fuel, thereby increasing gas mileage, and are used in anti-theft systems. Computers also
entertain, creating digitized sound on stereo systems or computer-animated features from
a digitally encoded laser disc. Computer programs, or applications, exist to aid every level
of education, from programs that teach simple addition or sentence construction to
programs that teach advanced calculus. Educators use computers to track grades and
communicate with students; with computer-controlled projection units, they can add
graphics, sound, and animation to their communications (see Computer-Aided Instruction).
Computers are used extensively in scientific research to solve mathematical problems,
investigate complicated data, or model systems that are too costly or impractical to build,
such as testing the air flow around the next generation of aircraft. The military employs
computers in sophisticated communications to encode and unscramble messages, and to
keep track of personnel and supplies.

III HOW COMPUTERS WORK

The physical computer and its components are known as hardware. Computer hardware
includes the memory that stores data and program instructions; the central processing unit
(CPU) that carries out program instructions; the input devices, such as a keyboard or
mouse, that allow the user to communicate with the computer; the output devices, such as
printers and video display monitors, that enable the computer to present information to the
user; and buses (hardware lines or wires) that connect these and other computer
components. The programs that run the computer are called software. Software generally
is designed to perform a particular type of task—for example, to control the arm of a robot
to weld a car’s body, to write a letter, to display and modify a photograph, or to direct the
general operation of the computer.
A The Operating System

When a computer is turned on it searches for instructions in its memory. These instructions
tell the computer how to start up. Usually, one of the first sets of these instructions is a
special program called the operating system, which is the software that makes the
computer work. It prompts the user (or other machines) for input and commands, reports
the results of these commands and other operations, stores and manages data, and
controls the sequence of the software and hardware actions. When the user requests that a
program run, the operating system loads the program in the computer’s memory and runs
the program. Popular operating systems, such as Microsoft Windows and the Macintosh
system (Mac OS), have graphical user interfaces (GUIs), which use tiny pictures, or icons, to
represent various files and commands. To access these files or commands, the user clicks
the mouse on the icon or presses a combination of keys on the keyboard. Some operating
systems allow the user to carry out these tasks via voice, touch, or other input methods.

B Computer Memory

To process information electronically, data are stored in a computer in the form of binary
digits, or bits, each having two possible representations (0 or 1). If a second bit is added to
a single bit of information, the number of representations is doubled, resulting in four
possible combinations: 00, 01, 10, or 11. A third bit added to this two-bit representation
again doubles the number of combinations, resulting in eight possibilities: 000, 001, 010,
011, 100, 101, 110, or 111. Each time a bit is added, the number of possible patterns is
doubled. Eight bits is called a byte; a byte has 256 possible combinations of 0s and 1s. See
also Expanded Memory; Extended Memory.
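
The doubling pattern described above is easy to verify. The short Python sketch below (an illustration, not part of any particular computer) enumerates every pattern for a given number of bits:

```python
from itertools import product

def bit_patterns(n):
    """Return every distinct pattern of n binary digits as strings."""
    return ["".join(bits) for bits in product("01", repeat=n)]

# Each added bit doubles the number of possible patterns: 2, 4, 8, ...
for n in (1, 2, 3):
    print(n, bit_patterns(n))

# Eight bits (one byte) yields 2**8 combinations.
print(len(bit_patterns(8)))  # 256
```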

A byte is a useful quantity in which to store information because it provides enough
possible patterns to represent the entire alphabet, in lowercase and uppercase, as well as
numeric digits, punctuation marks, and several character-sized graphics symbols, including
non-English characters such as π. A byte also can be interpreted as a pattern that
represents a number between 0 and 255. A kilobyte—1,024 bytes—can store about 1,000
characters; a megabyte can store about 1 million characters; a gigabyte can store about 1
billion characters; and a terabyte can store about 1 trillion characters. Computer
programmers usually decide how a given byte should be interpreted—that is, as a single
character, a character within a string of text, a single number, or part of a larger number.
Numbers can represent anything from chemical bonds to dollar figures to colors to sounds.
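
As the paragraph notes, the same byte can be read as a character or as a number; the choice belongs to the programmer. A brief Python illustration (the specific values are examples only):

```python
value = 0b01000001  # one byte: the bit pattern 01000001

# Interpreted as a number, this pattern is 65 ...
print(value)       # 65
# ... interpreted as a character (in the ASCII/Unicode encoding), it is 'A'.
print(chr(value))  # A

# The storage estimates in the text assume one character per byte.
kilobyte = 1024
megabyte = 1024 ** 2
print(kilobyte)    # 1024 bytes, roughly 1,000 characters
```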

The physical memory of a computer is either random access memory (RAM), which can be
read or changed by the user or computer, or read-only memory (ROM), which can be read
by the computer but not altered in any way. One way to store memory is within the
circuitry of the computer, usually in tiny computer chips that hold millions of bytes of
information. The memory within these computer chips is RAM. Memory also can be stored
outside the circuitry of the computer on external storage devices, such as magnetic floppy
disks, which can store about 2 megabytes of information; hard drives, which can store
gigabytes of information; compact discs (CDs), which can store up to 680 megabytes of
information; and digital video discs (DVDs), which can store 8.5 gigabytes of information. A
single CD can store nearly as much information as several hundred floppy disks, and some
DVDs can hold more than 12 times as much data as a CD.
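
The capacity comparisons above follow from simple arithmetic, using the approximate figures quoted in the text:

```python
# Approximate capacities in megabytes, as given in the article.
floppy_mb = 2
cd_mb = 680
dvd_mb = 8.5 * 1024  # 8.5 gigabytes expressed in megabytes

print(cd_mb // floppy_mb)  # 340 floppies per CD: "several hundred"
print(dvd_mb / cd_mb)      # 12.8: "more than 12 times as much data as a CD"
```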

C The Bus

The bus enables the components in a computer, such as the CPU and the memory circuits,
to communicate as program instructions are being carried out. The bus is usually a flat
cable with numerous parallel wires. Each wire can carry one bit, so the bus can transmit
many bits along the cable at the same time. For example, a 16-bit bus, with 16 parallel
wires, allows the simultaneous transmission of 16 bits (2 bytes) of information from one
component to another. Early computer designs utilized a single or very few buses. Modern
designs typically use many buses, some of them specialized to carry particular forms of
data, such as graphics.
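
The article's 16-bit example generalizes: each parallel wire carries one bit, so a bus moves width / 8 bytes per transfer. A minimal sketch:

```python
def bytes_per_transfer(bus_width_bits):
    """Each parallel wire carries one bit, so a transfer moves width/8 bytes."""
    return bus_width_bits // 8

print(bytes_per_transfer(16))  # 2 bytes, as in the article's example
print(bytes_per_transfer(64))  # 8 bytes on a wider bus
```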

D Input Devices

Input devices, such as a keyboard or mouse, permit the computer user to communicate
with the computer. Other input devices include a joystick, a rodlike device often used by
people who play computer games; a scanner, which converts images such as photographs
into digital images that the computer can manipulate; a touch panel, which senses the
placement of a user’s finger and can be used to execute commands or access files; and a
microphone, used to input sounds such as the human voice which can activate computer
commands in conjunction with voice recognition software. “Tablet” computers are being
developed that will allow users to interact with their screens using a penlike device.

E The Central Processing Unit

Information from an input device or from the computer’s memory is communicated via the
bus to the central processing unit (CPU), which is the part of the computer that translates
commands and runs programs. The CPU is a microprocessor chip—that is, a single piece of
silicon containing millions of tiny, microscopically wired electrical components. Information
is stored in a CPU memory location called a register. Registers can be thought of as the
CPU’s tiny scratchpad, temporarily storing instructions or data. When a program is running,
one special register called the program counter keeps track of which program instruction
comes next by maintaining the memory location of the next program instruction to be
executed. The CPU’s control unit coordinates and times the CPU’s functions, and it uses the
program counter to locate and retrieve the next instruction from memory.

In a typical sequence, the CPU locates the next instruction in the appropriate memory
device. The instruction then travels along the bus from the computer’s memory to the CPU,
where it is stored in a special instruction register. Meanwhile, the program counter changes
—usually increasing a small amount—so that it contains the location of the instruction that
will be executed next. The current instruction is analyzed by a decoder, which determines
what the instruction will do. Any data the instruction needs are retrieved via the bus and
placed in the CPU’s registers. The CPU executes the instruction, and the results are stored
in another register or copied to specific memory locations via a bus. This entire sequence
of steps is called an instruction cycle. Frequently, several instructions may be in process
simultaneously, each at a different stage in its instruction cycle. This is called pipeline
processing.
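
The instruction cycle described above can be modeled as a toy simulator. Everything here (the three-instruction set, the register names) is invented for illustration and matches no real CPU:

```python
def run(program, memory):
    """A toy CPU: fetch an instruction, advance the program counter,
    decode the instruction, and execute it, as in the cycle above."""
    pc = 0                         # program counter: location of next instruction
    registers = {"A": 0, "B": 0}   # the CPU's tiny scratchpad
    while pc < len(program):
        instruction = program[pc]  # fetch into the "instruction register"
        pc += 1                    # counter now holds the next instruction's location
        op, *args = instruction    # decode: what will this instruction do?
        if op == "LOAD":           # copy data from memory into a register
            reg, addr = args
            registers[reg] = memory[addr]
        elif op == "ADD":          # add register B into register A
            registers["A"] += registers["B"]
        elif op == "STORE":        # copy a register back to a memory location
            reg, addr = args
            memory[addr] = registers[reg]
    return memory

memory = [7, 5, 0]
program = [("LOAD", "A", 0), ("LOAD", "B", 1), ("ADD",), ("STORE", "A", 2)]
print(run(program, memory))  # [7, 5, 12]
```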

F Output Devices

Once the CPU has executed the program instruction, the program may request that the
information be communicated to an output device, such as a video display monitor or a flat
liquid crystal display. Other output devices are printers, overhead projectors, videocassette
recorders (VCRs), and speakers. See also Input/Output Devices.

IV PROGRAMMING LANGUAGES

Programming languages contain the series of commands that create software. A CPU
understands only its own limited set of instructions, known as machine code. All other
programming languages must be converted to machine code before the CPU can carry
them out. Computer programmers, however,
prefer to use other computer languages that use words or other commands because they
are easier to use. These other languages are slower because the language must be
translated first so that the computer can understand it. The translation can lead to code
that may be less efficient to run than code written directly in the machine’s language.

A Machine Language

Computer programs that can be run by a computer’s operating system are called
executables. An executable program is a sequence of extremely simple instructions known
as machine code. These instructions are specific to the individual computer’s CPU and
associated hardware; for example, Intel Pentium and PowerPC microprocessor chips each
have different machine languages and require different sets of codes to perform the same
task. Machine code instructions are few in number (roughly 20 to 200, depending on the
computer and the CPU). Typical instructions are for copying data from a memory location
or for adding the contents of two memory locations (usually registers in the CPU). Complex
tasks require a sequence of these simple instructions. Machine code instructions are binary
—that is, sequences of bits (0s and 1s). Because these sequences are long strings of 0s
and 1s and are usually not easy to understand, computer instructions usually are not
written in machine code. Instead, computer programmers write code in languages known
as an assembly language or a high-level language.

B Assembly Language

Assembly language uses easy-to-remember commands that are more understandable to
programmers than machine-language commands. Each machine language instruction has
an equivalent command in assembly language. For example, in one Intel assembly
language, the statement “MOV A, B” instructs the computer to copy data from location B
to location A. The same instruction in machine code is a string of 16 0s and 1s. Once an
assembly-language program is written, it is converted to a machine-language program by
another program called an assembler.

Assembly language is fast and powerful because of its correspondence with machine
language. It is still difficult to use, however, because assembly-language instructions are a
series of abstract codes and each instruction carries out a relatively simple task. In
addition, different CPUs use different machine languages and therefore require different
programs and different assembly languages. Assembly language is sometimes inserted
into a high-level language program to carry out specific hardware tasks or to speed up
parts of the high-level program that are executed frequently.
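
An assembler's basic job, translating each mnemonic into its fixed bit pattern, can be sketched as follows. The two-instruction "machine language" and its 16-bit format are entirely hypothetical:

```python
# A made-up machine language: each instruction is an 8-bit opcode
# followed by two 4-bit register numbers, 16 bits in all.
OPCODES = {"MOV": "00000001", "ADD": "00000010"}
REGISTERS = {"A": "0000", "B": "0001"}

def assemble(line):
    """Translate one assembly statement, e.g. 'MOV A, B', into bits."""
    mnemonic, operands = line.split(maxsplit=1)
    dst, src = [REGISTERS[r.strip()] for r in operands.split(",")]
    return OPCODES[mnemonic] + dst + src

print(assemble("MOV A, B"))  # 0000000100000001, a string of 16 0s and 1s
```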

C High-Level Languages

High-level languages were developed because of the difficulty of programming using
assembly languages. High-level languages are easier to use than machine and assembly
languages because their commands are closer to natural human language. In addition,
these languages are not CPU-specific. Instead, they contain general commands that work
on different CPUs. For example, a programmer writing in the high-level C++ programming
language who wants to display a greeting need include only the following command:

cout << "Hello, Encarta User!" << endl;

This command directs the computer’s CPU to display the greeting, and it will work no
matter what type of CPU the computer uses. When this statement is executed, the text
that appears between the quotes will be displayed. Although the “cout” and “endl” parts of
the above statement appear cryptic, programmers quickly become accustomed to their
meanings. For example, “cout” sends the greeting message to the “standard output”
(usually the computer user’s screen) and “endl” is how to tell the computer (when using
the C++ language) to go to a new line after it outputs the message. Like assembly-
language instructions, high-level languages also must be translated. This is the task of a
special program called a compiler. A compiler turns a high-level program into a CPU-
specific machine language. For example, a programmer may write a program in a high-
level language such as C++ or Java and then prepare it for different machines, such as a
Sun Microsystems work station or a personal computer (PC), using compilers designed for
those machines. This simplifies the programmer’s task and makes the software more
portable to different users and machines.

V FLOW-MATIC

American naval officer and mathematician Grace Murray Hopper helped develop the first
commercially available high-level software language, FLOW-MATIC, in 1957. Hopper is
credited with inventing the term bug, which indicates a computer malfunction; in 1947 she
discovered a hardware failure in the Mark II computer caused by a moth trapped between
its mechanical relays. She documented the event in her laboratory notebook, and the term
eventually came to represent any computer error, including one based strictly on incorrect
instructions in software. Hopper taped the moth into her notebook and wrote, “First actual
case of a bug being found.”

VI FORTRAN

From 1954 to 1958 American computer scientist John Backus of International Business
Machines, Inc. (IBM) developed Fortran, an acronym for Formula Translation. It became a
standard programming language because it could process mathematical formulas. Fortran
and its variations are still in use today, especially in physics.

VII BASIC

Hungarian-American mathematician John Kemeny and American mathematician Thomas
Kurtz at Dartmouth College in Hanover, New Hampshire, developed BASIC (Beginner’s All-
purpose Symbolic Instruction Code) in 1964. The language was easier to learn than its
predecessors and became popular due to its friendly, interactive nature and its inclusion on
early personal computers. Unlike languages that require all their instructions to be
translated into machine code first, BASIC is turned into machine language line by line as
the program runs. BASIC commands typify high-level languages because of their simplicity
and their closeness to natural human language. For example, a program that divides a
number in half can be written as

10 INPUT "ENTER A NUMBER"; X
20 Y = X/2
30 PRINT "HALF OF THAT NUMBER IS"; Y

The numbers that precede each line are chosen by the programmer to indicate the
sequence of the commands. The first line prints “ENTER A NUMBER” on the computer
screen followed by a question mark to prompt the user to type in the number labeled “X.”
In the next line, that number is divided by two and stored as “Y.” In the third line, the result
of the operation is displayed on the computer screen. Even though BASIC is rarely used
today, this simple program demonstrates how data are stored and manipulated in most
high-level programming languages.
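
For comparison, the same program in Python shows how little the logic changes between high-level languages (the function form is a minor adaptation, since Python has no line numbers):

```python
def half(x):
    """Store the input as x, divide it by two, and report the result."""
    y = x / 2
    return f"HALF OF THAT NUMBER IS {y}"

# Equivalent to the three BASIC lines: input, compute, print.
print(half(10))  # HALF OF THAT NUMBER IS 5.0
```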

VIII OTHER HIGH-LEVEL LANGUAGES

Other high-level languages in use today include C, C++, Ada, Pascal, LISP, Prolog, COBOL,
Visual Basic, and Java. Some languages, such as the “markup languages” known as HTML,
XML, and their variants, are intended to display data, graphics, and media selections,
especially for users of the World Wide Web. Markup languages are often not considered
programming languages, but they have become increasingly sophisticated.

A Object-Oriented Programming Languages

Object-oriented programming (OOP) languages, such as C++ and Java, are based on
traditional high-level languages, but they enable a programmer to think in terms of
collections of cooperating objects instead of lists of commands. Objects, such as a circle,
have properties such as the radius of the circle and the command that draws it on the
computer screen. Classes of objects can inherit features from other classes of objects. For
example, a class defining squares can inherit features such as right angles from a class
defining rectangles. This set of programming classes simplifies the programmer’s task,
resulting in more “reusable” computer code. Reusable code allows a programmer to use
code that has already been designed, written, and tested. This makes the programmer’s
task easier, and it results in more reliable and efficient programs.
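
The square-inherits-from-rectangle relationship described above can be sketched in Python; the class names and properties are illustrative:

```python
class Rectangle:
    """A class of objects with properties (width, height) and commands."""
    def __init__(self, width, height):
        self.width, self.height = width, height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    """Squares inherit the rectangle's features, such as its right angles
    and its area command, adding only the rule that all sides are equal."""
    def __init__(self, side):
        super().__init__(side, side)

# The square reuses code already designed, written, and tested for rectangles.
print(Square(4).area())  # 16
```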

IX TYPES OF COMPUTERS
A Digital and Analog

Computers can be either digital or analog. Virtually all modern computers are digital.
Digital refers to the processes in computers that manipulate binary numbers (0s or 1s),
which represent switches that are turned on or off by electrical current. A bit can have the
value 0 or the value 1, but nothing in between 0 and 1. Analog refers to circuits or
numerical values that have a continuous range. Both 0 and 1 can be represented by analog
computers, but so can 0.5, 1.5, or a number like π (approximately 3.14).

A desk lamp can serve as an example of the difference between analog and digital. If the
lamp has a simple on/off switch, then the lamp system is digital, because the lamp either
produces light at a given moment or it does not. If a dimmer replaces the on/off switch,
then the lamp is analog, because the amount of light can vary continuously from on to off
and all intensities in between.
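
The lamp analogy can be put into code: a digital lamp has exactly two states, while an analog lamp's level varies continuously (a simple illustration):

```python
def digital_lamp(switch_on):
    """An on/off switch: exactly two possible light levels."""
    return 1 if switch_on else 0

def analog_lamp(dimmer):
    """A dimmer: any level between fully off (0.0) and fully on (1.0)."""
    return max(0.0, min(1.0, dimmer))

print(digital_lamp(True))  # 1
print(analog_lamp(0.5))    # 0.5, a state the digital lamp cannot represent
```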

Analog computer systems were the first type to be produced. A popular analog computer
used in the 20th century was the slide rule. To perform calculations with a slide rule, the
user slides a narrow, gauged wooden strip inside a rulerlike holder. Because the sliding is
continuous and there is no mechanism to stop at any exact values, the slide rule is analog.
New interest has been shown recently in analog computers, particularly in areas such as
neural networks. These are specialized computer designs that attempt to mimic neurons of
the brain. They can be built to respond to continuous electrical signals. Most modern
computers, however, are digital machines whose components have a finite number of
states—for example, the 0 or 1, or on or off bits. These bits can be combined to denote
information such as numbers, letters, graphics, sound, and program instructions.

B Range of Computer Ability

Computers exist in a wide range of sizes and power. The smallest are embedded within the
circuitry of appliances, such as televisions and wristwatches. These computers are typically
preprogrammed for a specific task, such as tuning to a particular television frequency,
delivering doses of medicine, or keeping accurate time. They generally are “hard-wired”—
that is, their programs are represented as circuits that cannot be reprogrammed.

Programmable computers vary enormously in their computational power, speed, memory,
and physical size. Some small computers can be held in one hand and are called personal
digital assistants (PDAs). They are used as notepads, scheduling systems, and address
books; if equipped with a cellular phone, they can connect to worldwide computer networks
to exchange information regardless of location. Hand-held game devices are also examples
of small computers.

Portable laptop and notebook computers and desktop PCs are typically used in businesses
and at home to communicate on computer networks, for word processing, to track
finances, and for entertainment. They have large amounts of internal memory to store
hundreds of programs and documents. They are equipped with a keyboard; a mouse,
trackball, or other pointing device; and a video display monitor or liquid crystal display
(LCD) to display information. Laptop and notebook computers usually have hardware and
software similar to PCs, but they are more compact and have flat, lightweight LCDs instead
of television-like video display monitors. Most sources consider the terms “laptop” and
“notebook” synonymous.

Workstations are similar to personal computers but have greater memory and more
extensive mathematical abilities, and they are connected to other workstations or personal
computers to exchange data. They are typically found in scientific, industrial, and business
environments—especially financial ones, such as stock exchanges—that require complex
and fast computations.

Mainframe computers have more memory, speed, and capabilities than workstations and
are usually shared by multiple users through a series of interconnected computers. They
control businesses and industrial facilities and are used for scientific research. The most
powerful mainframe computers, called supercomputers, process complex and time-
consuming calculations, such as those used to create weather predictions. Large
businesses, scientific institutions, and the military use them. Some supercomputers have
many sets of CPUs. These computers break a task into small pieces, and each CPU
processes a portion of the task to increase overall speed and efficiency. Such computers
are called parallel processors. As computers have increased in sophistication, the
boundaries between the various types have become less rigid. The performance of various
tasks and types of computing have also moved from one type of computer to another. For
example, networked PCs can work together on a given task in a version of parallel
processing known as distributed computing.
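
The divide-and-conquer idea behind parallel processors and distributed computing, splitting a task into small pieces and combining the partial results, can be sketched with Python's standard library. This illustrates the principle only; it describes no particular machine:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(numbers, workers=4):
    """Break the task into small pieces and let each worker process one."""
    chunk = max(1, len(numbers) // workers)
    pieces = [numbers[i:i + chunk] for i in range(0, len(numbers), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker sums its own piece; the partial results are combined.
        return sum(pool.map(sum, pieces))

print(parallel_sum(list(range(1, 101))))  # 5050
```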

X NETWORKS

Computers can communicate with other computers through a series of connections and
associated hardware called a network. The advantage of a network is that data can be
exchanged rapidly, and software and hardware resources, such as hard-disk space or
printers, can be shared. Networks also allow remote use of a computer by a user who
cannot physically access the computer.

One type of network, a local area network (LAN), consists of several PCs or workstations
connected to a special computer called a server, often within the same building or office
complex. The server stores and manages programs and data. A server often contains all of
a networked group’s data and enables LAN workstations or PCs to be set up without large
storage capabilities. In this scenario, each PC may have “local” memory (for example, a
hard drive) specific to itself, but the bulk of storage resides on the server. This reduces the
cost of the workstation or PC because less expensive computers can be purchased, and it
simplifies the maintenance of software because the software resides only on the server
rather than on each individual workstation or PC.

Mainframe computers and supercomputers commonly are networked. They may be
connected to PCs, workstations, or terminals that have no computational abilities of their
own. These “dumb” terminals are used only to enter data into, or receive output from, the
central computer.

Wide area networks (WANs) are networks that span large geographical areas. Computers
can connect to these networks to use facilities in another city or country. For example, a
person in Los Angeles can browse through the computerized archives of the Library of
Congress in Washington, D.C. The largest WAN is the Internet, a global consortium of
networks linked by common communication programs and protocols (a set of established
standards that enable computers to communicate with each other). The Internet is a
mammoth resource of data, programs, and utilities. American computer scientist Vinton
Cerf was largely responsible for creating the Internet in 1973 as part of the United States
Department of Defense Advanced Research Projects Agency (DARPA). In 1984 the
development of Internet technology was turned over to private, government, and scientific
agencies. The World Wide Web, developed in the 1980s by British physicist Timothy
Berners-Lee, is a system of information resources accessed primarily through the Internet.
Users can obtain a variety of information in the form of text, graphics, sounds, or video.
These data are extensively cross-indexed, enabling users to browse (transfer their
attention from one information site to another) via buttons, highlighted text, or
sophisticated searching software known as search engines.

XI HISTORY
A Beginnings

The history of computing began with an analog machine. In 1623 German scientist Wilhelm
Schikard invented a machine that used 11 complete and 6 incomplete sprocketed wheels
that could add, and with the aid of logarithm tables, multiply and divide.

French philosopher, mathematician, and physicist Blaise Pascal invented a machine in
1642 that added and subtracted, automatically carrying and borrowing digits from column
to column. Pascal built 50 copies of his machine, but most served as curiosities in parlors of
the wealthy. Seventeenth-century German mathematician Gottfried Leibniz designed a
special gearing system to enable multiplication on Pascal’s machine.

B First Punch Cards

In the early 19th century French inventor Joseph-Marie Jacquard devised a specialized type
of computer: a silk loom. Jacquard’s loom used punched cards to program patterns that
helped the loom create woven fabrics. Although Jacquard was rewarded and admired by
French emperor Napoleon I for his work, he fled for his life from the city of Lyon pursued by
weavers who feared their jobs were in jeopardy due to Jacquard’s invention. The loom
prevailed, however: When Jacquard died, more than 30,000 of his looms existed in Lyon.
The looms are still used today, especially in the manufacture of fine furniture fabrics.

C Precursor to Modern Computer

Another early mechanical computer was the Difference Engine, designed in the early 1820s
by British mathematician and scientist Charles Babbage. Although never completed by
Babbage, the Difference Engine was intended to be a machine with a 20-decimal capacity
that could solve mathematical problems. Babbage also made plans for another machine,
the Analytical Engine, considered the mechanical precursor of the modern computer. The
Analytical Engine was designed to perform all arithmetic operations efficiently; however,
Babbage’s lack of political skills kept him from obtaining the approval and funds to build it.

Augusta Ada Byron, countess of Lovelace, was a personal friend and student of Babbage.
She was the daughter of the famous poet Lord Byron and one of only a few woman
mathematicians of her time. She prepared extensive notes concerning Babbage’s ideas
and the Analytical Engine. Lovelace’s conceptual programs for the machine led to the
naming of a programming language (Ada) in her honor. Although the Analytical Engine was
never built, its key concepts, such as the capacity to store instructions, the use of punched
cards as a primitive memory, and the ability to print, can be found in many modern
computers.

XII DEVELOPMENTS IN THE 20TH CENTURY


A Early Electronic Calculators

Herman Hollerith, an American inventor, used an idea similar to Jacquard’s loom when he
combined the use of punched cards with devices that created and electronically read the
cards. Hollerith’s tabulator was used for the 1890 U.S. census, and it cut tabulation time
to between a third and a quarter of what hand counting had previously required.
Hollerith’s Tabulating Machine Company eventually merged with two companies to
form the Computing-Tabulating-Recording Company. In 1924 the company changed its
name to International Business Machines (IBM).

In 1936 British mathematician Alan Turing proposed the idea of a machine that could
process equations without human direction. The machine (now known as a Turing machine)
resembled an automatic typewriter that used symbols for math and logic instead of letters.
Turing intended the device to be a “universal machine” that could be used to duplicate or
represent the function of any other existing machine. Turing’s machine was the theoretical
precursor to the modern digital computer. The Turing machine model is still used by
modern computational theorists.

In the 1930s American mathematician Howard Aiken developed the Mark I calculating
machine, which was built by IBM. This electromechanical calculating machine used relays and
electromagnetic components to replace mechanical components. In later machines, Aiken
used vacuum tubes and solid state transistors (tiny electrical switches) to manipulate the
binary numbers. Aiken also introduced computers to universities by establishing the first
computer science program at Harvard University in Cambridge, Massachusetts. Aiken
obsessively mistrusted the concept of storing a program within the computer, insisting that
the integrity of the machine could be maintained only through a strict separation of
program instructions from data. His computer had to read instructions from punched cards,
which could be stored away from the computer. He also urged the National Bureau of
Standards not to support the development of computers, insisting that there would never
be a need for more than five or six of them nationwide.

B EDVAC, ENIAC, and UNIVAC

At the Institute for Advanced Study in Princeton, New Jersey, Hungarian-American
mathematician John von Neumann developed one of the first computers used to solve
problems in mathematics, meteorology, economics, and hydrodynamics. Von Neumann's
1945 design for the Electronic Discrete Variable Automatic Computer (EDVAC)—in stark
contrast to the designs of Aiken, his contemporary—was the first electronic computer
design to incorporate a program stored entirely within its memory. This machine led to
several others, some with clever names like ILLIAC, JOHNNIAC, and MANIAC.

American physicist John Mauchly proposed the electronic digital computer called ENIAC,
the Electronic Numerical Integrator And Computer. He helped build it along with American
engineer John Presper Eckert, Jr., at the Moore School of Engineering at the University of
Pennsylvania in Philadelphia. ENIAC was operational in 1945 and introduced to the public in
1946. It is regarded as the first successful, general digital computer. It occupied 167 sq m
(1,800 sq ft), weighed more than 27,000 kg (60,000 lb), and contained more than 18,000
vacuum tubes. Roughly 2,000 of the computer’s vacuum tubes were replaced each month
by a team of six technicians. Many of ENIAC’s first tasks were for military purposes, such as
calculating ballistic firing tables and designing atomic weapons. Since ENIAC was initially
not a stored program machine, it had to be reprogrammed for each task.

Eckert and Mauchly eventually formed their own company, which was then bought by
Remington Rand. They produced the Universal Automatic Computer (UNIVAC), which was
used for a broader variety of commercial applications. The first UNIVAC was delivered to
the United States Census Bureau in 1951. By 1957, there were 46 UNIVACs in use.

Between 1937 and 1939, while teaching at Iowa State College, American physicist John
Vincent Atanasoff built a prototype computing device called the Atanasoff-Berry Computer,
or ABC, with the help of his assistant, Clifford Berry. Atanasoff developed the concepts that
were later used in the design of the ENIAC. Atanasoff’s device was the first computer to
separate data processing from memory, but it is not clear whether a functional version was
ever built. Atanasoff did not receive credit for his contributions until 1973, when a lawsuit
regarding the patent on ENIAC was settled.

XIII THE TRANSISTOR AND INTEGRATED CIRCUITS TRANSFORM COMPUTING

In 1948, at Bell Telephone Laboratories, American physicists Walter Houser Brattain, John
Bardeen, and William Bradford Shockley developed the transistor, a device that can act as
an electric switch. The transistor had a tremendous impact on computer design, replacing
costly, energy-inefficient, and unreliable vacuum tubes.

In the late 1960s integrated circuits (tiny transistors and other electrical components
arranged on a single chip of silicon) replaced individual transistors in computers. Integrated
circuits resulted from the simultaneous, independent work of Jack Kilby at Texas
Instruments and Robert Noyce of the Fairchild Semiconductor Corporation in the late
1950s. As integrated circuits became miniaturized, more components could be designed
into a single computer circuit. In the 1970s refinements in integrated circuit technology led
to the development of the modern microprocessor, integrated circuits that contained
thousands of transistors. Modern microprocessors can contain more than 40 million
transistors.

Manufacturers used integrated circuit technology to build smaller and cheaper computers.
The first of these so-called personal computers (PCs)—the Altair 8800—appeared in 1975,
sold by Micro Instrumentation Telemetry Systems (MITS). The Altair used an 8-bit Intel
8080 microprocessor, had 256 bytes of RAM, received input through switches on the front
panel, and displayed output on rows of light-emitting diodes (LEDs). Refinements in the PC
continued with the inclusion of video displays, better storage devices, and CPUs with more
computational abilities. Graphical user interfaces were first designed by the Xerox
Corporation, then later used successfully by Apple Computer, Inc. Today the development
of sophisticated operating systems such as Windows, the Mac OS, and Linux enables
computer users to run programs and manipulate data in ways that were unimaginable in
the mid-20th century.

Several researchers claim the “record” for the largest single calculation ever performed.
One large single calculation was accomplished by physicists at IBM in 1995. They solved
one million trillion mathematical subproblems by continuously running 448 computers for
two years. Their analysis demonstrated the existence of a previously hypothetical
subatomic particle called a glueball. Japan, Italy, and the United States are collaborating to
develop new supercomputers that will run these types of calculations 100 times faster.
In 1996 IBM challenged Garry Kasparov, the reigning world chess champion, to a chess
match with a supercomputer called Deep Blue. The computer had the ability to compute
more than 100 million chess positions per second. In a 1997 rematch Deep Blue defeated
Kasparov, becoming the first computer to win a match against a reigning world chess
champion with regulation time controls. Many experts predict these types of parallel
processing machines will soon surpass human chess playing ability, and some speculate
that massive calculating power will one day replace intelligence. Deep Blue serves as a
prototype for future computers that will be required to solve complex problems. At issue,
however, is whether a computer can be developed with the ability to learn to solve
problems on its own, rather than one programmed to solve a specific set of tasks.

XIV THE FUTURE OF COMPUTERS

In 1965 semiconductor pioneer Gordon Moore predicted that the number of transistors
contained on a computer chip would double every year. This is now known as Moore’s Law,
and it has proven to be somewhat accurate. The number of transistors and the computational speed of microprocessors currently double approximately every 18 months. Components continue to shrink in size and are becoming faster, cheaper, and
more versatile.
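That 18-month doubling compounds dramatically. A short sketch of the arithmetic (the starting count is illustrative, loosely echoing the 40-million figure above):

```python
def projected_transistors(start_count, years, doubling_years=1.5):
    """Exponential growth under an assumed 18-month doubling period."""
    return start_count * 2 ** (years / doubling_years)

# Starting from an illustrative 40 million transistors:
for elapsed in (0, 3, 6, 9):
    count = projected_transistors(40e6, elapsed)
    print(f"after {elapsed:>2} years: ~{count / 1e6:,.0f} million transistors")
# After 9 years (six doublings) the count has grown 64-fold.
```

Sixty-four-fold growth in nine years is why components keep getting cheaper per transistor even as chips grow far more complex.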

With their increasing power and versatility, computers simplify day-to-day life.
Unfortunately, as computer use becomes more widespread, so do the opportunities for
misuse. Computer hackers—people who illegally gain access to computer systems—often
violate privacy and can tamper with or destroy records. Programs called viruses or worms
can replicate and spread from computer to computer, erasing information or causing
malfunctions. Other individuals have used computers to electronically embezzle funds and
alter credit histories (see Computer Security). New ethical issues also have arisen, such as
how to regulate material on the Internet and the World Wide Web. Long-standing issues,
such as privacy and freedom of expression, are being reexamined in light of the digital
revolution. Individuals, companies, and governments are working to solve these problems
through informed conversation, compromise, better computer security, and regulatory
legislation.

Computers will become more advanced, and they will also become easier to use. Improved
speech recognition will simplify the operation of a computer. Virtual reality, the
technology of interacting with a computer using all of the human senses, will also
contribute to better human and computer interfaces. Standards for virtual-reality program
languages—for example, Virtual Reality Modeling Language (VRML)—are currently in use or
are being developed for the World Wide Web.

Other, more exotic models of computation are being developed, including biological computing
that uses living organisms, molecular computing that uses molecules with particular
properties, and computing that uses deoxyribonucleic acid (DNA), the basic unit of
heredity, to store data and carry out operations. These are examples of possible future
computational platforms that, so far, are limited in abilities or are strictly theoretical.
Scientists investigate them because of the physical limitations of miniaturizing circuits
embedded in silicon. There are also limitations related to heat generated by even the
tiniest of transistors.

Intriguing breakthroughs occurred in the area of quantum computing in the late 1990s.
Quantum computers under development use components of a chloroform molecule (a
combination of carbon, hydrogen, and chlorine atoms) and a variation of a medical procedure called
magnetic resonance imaging (MRI) to compute at a molecular level. Scientists use a branch
of physics called quantum mechanics, which describes the behavior of subatomic particles
(particles that make up atoms), as the basis for quantum computing. Quantum computers
may one day be thousands to millions of times faster than current computers, because
they take advantage of the laws that govern the behavior of subatomic particles. These
laws allow quantum computers to examine all possible answers to a query simultaneously.
Future uses of quantum computers could include code breaking (see cryptography) and
large database queries. Theorists of chemistry, computer science, mathematics, and
physics are now working to determine the possibilities and limitations of quantum
computing.
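The "all possible answers simultaneously" idea can be pictured with an ordinary classical simulation of the bookkeeping involved (this is only an illustration of a state vector, not a real quantum speedup; the function name is ours): putting n qubits into equal superposition assigns one amplitude to each of the 2^n basis states at once.

```python
import math

def equal_superposition(n_qubits):
    """State vector after a Hadamard gate on each qubit of |00...0>:
    every one of the 2**n basis states gets the same amplitude."""
    dim = 2 ** n_qubits
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = equal_superposition(3)
probs = [a * a for a in state]   # Born rule: probability = |amplitude|^2
print(len(state))                # 8 basis states tracked at once
print(round(sum(probs), 10))     # probabilities sum to 1.0
```

A quantum algorithm would then interfere these amplitudes so that wrong answers cancel. A classical simulation like this one pays an exponential cost in memory as qubits are added, which is exactly where the hoped-for speedup lies.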

Communications between computer users and networks will benefit from new technologies
such as broadband communication systems that can carry significantly more data faster or
more conveniently to and from the vast interconnected databases that continue to grow in
number and type.

Contributed By:
Timothy Law Snyder
Computer

Since the beginning of the Industrial Revolution, people wanted and needed an easier
way of calculating and measuring. Through the dreams of Charles Babbage, the
computer was born. These new machines could do any regular math more than twice
as fast as any human. Sadly, these ideas were not appreciated until almost one
hundred years later. In the 1940s the idea of computers was brought up again, and
people finally started crediting Babbage's work. The technology available by then
made it possible for people to construct a digital computer, and building one became
a necessity when World War II came about. Important names such as ENIAC and IBM
came about, and computers became a very wide interest in the world. What influenced
the personal computers of today?

The thought of a machine being more intelligent than a mathematician was laughed at
and thought of as an impossibility. That all changed when Charles Babbage was brought into the world.
Charles Babbage was a mathematician, an engineer, and a future computer designer. He
was actually known as the Grandfather (Slater 3) of the modern day computer, and he
was, and still is, thought of as ahead of his time. Babbage entered Trinity College,
Cambridge, in 1810, where he studied mathematics and chemistry. Between 1815 and 1820
he was involved mostly in mathematics, studying algebra. In 1822 he finally built his
first mechanical computer, the Difference Engine, the first ever mechanical computer;
it could add, subtract, divide, and multiply. In 1834 he started working on a much
more advanced machine, the Analytical Engine, which would be steam
powered and fully automatic. This would have been his greatest achievement.
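The Difference Engine's approach can be illustrated in modern terms: it tabulated polynomials by the method of finite differences, so that after an initial setup every new table value needs only addition. A minimal Python sketch of the idea (the function names and layout are ours, not Babbage's notation):

```python
def difference_table(values):
    """Initial column settings: f(0), then the first entry of each
    successive row of differences."""
    diffs = []
    row = list(values)
    while row:
        diffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return diffs

def tabulate(diffs, n):
    """Produce n table values using only repeated addition."""
    diffs = list(diffs)
    out = []
    for _ in range(n):
        out.append(diffs[0])
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]   # each column absorbs the one below it
    return out

# f(x) = x*x + x + 41, a polynomial famously used to demonstrate the engine
seed = [x * x + x + 41 for x in range(4)]    # 41, 43, 47, 53
print(tabulate(difference_table(seed), 7))   # [41, 43, 47, 53, 61, 71, 83]
```

Because each step uses only addition, a machine of gears and wheels could crank out table after table without ever multiplying.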
Unfortunately, the technology available to him was not advanced enough for Babbage
to build what would have been the first digital computer. Another reason he never
built his Analytical Engine was that he almost never completely finished a project,
as he was obsessed with perfection. Charles Babbage died in 1871. Sadly, he was
forgotten for seventy years, until the computer revolution. In the 1940s he finally
got recognition for his ideas; some of the first digital computers were in fact very
similar to his plans for the Analytical Engine. Technology was indeed very much
better in the 1940s, a perfect environment for the digital computer to be born. The
vacuum tube was already used widely in all types of electronic devices, including
televisions and stereos, and people found that large numbers of them, switched on
and off, could represent the zeros and ones of computer code. The government was the
only place to go for a grant for such a project, but at first it wanted nothing to
do with computers. Basically, computers were thought of as too radical an idea for
which there was no need, a view held mostly by the military. However, this view
would soon change. When World War II
started, the computer had new hope. New weapons were designed which required
trajectory tables for guidance, and tracking enemy and friendly planes, as well as
guiding missiles and bombs to their targets, was extremely difficult work for
people. In other words, people were too slow; a computer would do all of these jobs
much more quickly and precisely than any human. In 1943 the U.S. military gave in,
and construction of the new computer started immediately. When it was finished and
became operational in 1945, it was named the ENIAC, meaning Electronic Numerical
Integrator And Computer. It used 17,468 vacuum tubes, covering three gigantic walls,
and it was very much quicker than any human mathematician. When Russia detonated an
atomic bomb in 1949, the stakes rose further: enemy targets had to be neutralized
much more quickly, and later machines in this line could monitor 47 airplanes at
once while performing other tasks. The ENIAC had one major flaw: every time a new
problem was set up, the machine had to be rewired by hand, with cables unplugged and
plugged in again. New technology was already being developed which made the ENIAC
look sluggish. For this reason a newer computer, the EDVAC, was designed beginning
in 1945. EDVAC stands for
Electronic Discrete Variable Automatic Computer. The EDVAC made use of 4,000
vacuum tubes and 10,000 crystal diodes. There was an improvement in speed over the
ENIAC, but the main evolution was that the EDVAC could store data for long periods
of time. This computer was first used on April 20, 1951, and remained in use until
1961. Project Whirlwind was the next really drastic milestone, though this was not
true at first: initially it consisted only of advanced vacuum tubes. The basic
vacuum tube had a life of 500 hours, but Whirlwind's tubes used a silicon-free
cathode, which decreased cathode deterioration to barely anything and increased the
life of a tube to 5,000 hours. This meant less time and money spent on replacing
vacuum tubes. Even so, the machine was still unreliable and would break down, and
although its vacuum tubes were much more efficient, they had to be replaced often.
Computers needed a new form of memory storage. Finally, the big
breakthrough occurred. A theory of magnetic coils was introduced. These coils were
small rings made of a special ferrite metal. Ferrite was much less expensive than
vacuum tubes, and the coils could store information for as long as needed; vacuum
tubes could do this only with a constant supply of electricity, while the coils
needed none. A man named Jay Forrester proved this theory very useful. He strung
many of these donut-shaped coils on a wire grid. Each coil had its own location on
the grid and could be accessed and used much more quickly than a vacuum tube. When a
vacuum tube was turned on, it represented a one, and when off, a zero; the ferrite
coils worked in much the same way, a one when magnetized north and a zero when
magnetized south. This was named Random-Access Coincident-Current Memory, and it
more than doubled operating speed. The prototype was improved and received a shorter
name, RAM, for Random Access Memory. This type of memory is used and then reset for
its next instructions. Other important
advances came about around this time. One of these was time sharing. Many people
wanted to use these new computers, but the machines were too expensive and large to
fit in a house. In time sharing, one computer works on many problems at one time,
switching rapidly among the jobs of many users. Another method of making computers
quicker and more available to the public was batch processing. In batch processing,
problems are prepared and held on magnetic drums, disk packs, or tapes; after they
are solved, the results are displayed or printed, and the problem is then deleted,
or dumped, to make room for the next one. With all of these new advances, the
computer was becoming easier to use. Also, with the new ferrite coils and other
smaller parts, the computer was shrinking considerably. Soon computers would be
affordable and small enough for every household to have one. This all became possible
with two simple inventions: the transistor and the microprocessor. The transistor
worked like a vacuum tube, but it was much smaller and much more affordable. A
transistor consists of only a plastic casing containing three fine wire strands,
while a vacuum tube is an electrical valve used to control the flow of electricity
and the amplification of electric signals, about the size of a light bulb and
costing about one dollar. The transistor does everything a vacuum tube does, except
it costs only about five cents and is about as big as a fingertip. It also holds
more information: the transistor increased RAM amounts in computers from 8,000 to
64,000 words. Eventually transistors were made incredibly small, allowing hundreds
of them to be placed on one chip. They were faster, too, cutting computer access
times from three milliseconds to two. By 1980 transistors were so small that
hundreds of thousands could be placed on one chip about the size of a fingernail.
Transistors made computers small enough to fit on a desktop. However, there was no
way of controlling all of this. In older computers a central processing unit was
used, but these units were very large and extremely expensive. Thus, the birth of
the microprocessor. Intel is the company that created the first microprocessor,
named the 4004. This microprocessor (as with all
microprocessors today) ran on ROM. ROM stands for Read Only Memory; it stores
constantly used, unchanging data. The microprocessor controls all the functions of
the computer. There were many improvements over the 4004, and eventually, by the
late 1980s, some personal computers run by microprocessors could execute 4,000,000
instructions per second. However, not many people had computers. Sure, the machines
were very powerful, small enough, and affordable enough, but what could they really
do? In order for these new machines to come into the household, they needed
something interesting to do. '... People don't realize that frivolity is the gateway
to the future, in that most future products don't start as necessities, but toys'
(qtd. in Slater 300). These are the words of Nolan Bushnell, the founder of Atari, a
big company that manufactured arcade games. Atari means 'watch out or I'm going to
get you on the next turn' (Slater 301). Atari eventually got into the home computer
business when it created home video games. This was the key to getting computers
into homes: fun. By 1985, two out of five households that had a television also had
a computer. Eventually, this allowed personal computers to advance into what they
are today. Today's computers are highly advanced. They run on new operating
systems, such as Windows 95, Windows NT, and Windows 98. These OSs are so simple
that virtually anyone can use a computer. Modern computers have CD-quality sound
systems and many accessories, peripherals, and programs, including peripherals such
as the mouse and the joystick. They make use of modems and phone lines to
communicate with other computers, and can literally give the user access to
anywhere in the world via the Internet. Some computers use Pentium technology,
making clock and access speeds even quicker, and many also make use of new MMX
technology, improving graphics and video speed and quality. Processors have gotten
so fast that they are now clocked in megahertz (one million hertz), the rate at
which they read ROM and RAM. Computers now can use timing and voice recognition;
this means you can control your whole house by linking it with your computer, so
that your entire house is completely automated. Computers today also use highly
advanced forms of input. One of the newest forms is the CD-ROM, and even newer, the
DVD-ROM. CD-ROM stands for Compact Disk Read Only Memory. These disks can hold up
to 600 megabytes of information and are much quicker than regular floppy disks. A
CD-ROM consists of an ultra-thin grooved metal sheet enclosed in hard plastic, in
which a groove stands for a one and a flat area for a zero. The DVD-ROM is even
newer: an advanced CD-ROM holding twice as much information (1.2 gigabytes), enough
for a full-length movie in perfect quality. Nearly anything can be stored on a
computer, such as pictures, sound, and movies. These files are stored by either
internal or external means. When information is stored internally, it is kept on a
hard disk, which is like a regular floppy disk but with much larger storage,
quicker access speed, and a slightly larger physical size. These drives sometimes
store so much information that their capacity must be measured in gigabytes. To put
it in perspective, one gigabyte is equal to 1,000 megabytes, one megabyte is equal
to 1,000 kilobytes, and one kilobyte is equal to 1,000 bytes, each of which can
represent a letter or a number.

Conclusion

The computer started in the thoughts of Charles
Babbage. He created the mechanical computer, the Difference Engine, and was working
on a newer, more advanced computer when he died, before it could be completed. In
the 1940s the interest in computers was re-sparked. With newer technology and the
creation of the vacuum tube, the ENIAC was created, an incredibly large machine
which filled an entire office floor. New advancements made computers smaller and
smaller; eventually, with the invention of the transistor and the microprocessor,
they were small enough to fit in the home. They were brought into homes with the
Atari game system, and computers were a large success. Nowadays, computers 100 times
faster than the ENIAC can fit on a desktop. They can be linked together through
phone wires and modems, and can store millions of times more information than any
computers of the 1940s and 1950s. Computers are a very big part of everyday life,
and it all started with one little dream of a regular man, Charles Babbage.

Bibliography

Slater, Robert. Portraits in Silicon. London: The MIT Press, 1987.
Computer History Association of California. http://www.chac.org/chac
HCS Virtual Computer History Museum. http://www.asap.unimelb.edu.au/hstm/data/337.htm
Perspectives of the Smithsonian: Smithsonian Computer History. http://www.si.edu/resource/tours/comphist/computer.htm
