
A PHYSICIST'S GUIDE TO SKEPTICISM
Applying laws of physics to faster-than-light travel, psychic phenomena, telepathy, time travel, UFO's, and other pseudoscientific claims

MILTON A. ROTHMAN

PROMETHEUS BOOKS
Buffalo, New York

A PHYSICIST'S GUIDE TO SKEPTICISM. Copyright 1988 by Milton A. Rothman. Printed in the United States of America. No part of this book may be used or reproduced in any manner whatsoever without written permission, except in the case of brief quotations embodied in critical articles and reviews. Inquiries should be addressed to Prometheus Books, 700 East Amherst Street, Buffalo, New York 14215.

91 90 89 88 4 3 2 1

Library of Congress Cataloging-in-Publication Data
Rothman, Milton A. A physicist's guide to skepticism
1. Science--Philosophy. 2. Physics. 3. Reality. 4. Skepticism. I. Title.
Q175.R5648 1988 501 88-4077
ISBN 0-87975-440-0

To Miriam

INTRODUCTION

This book is philosophy of science as understood by an experimental physicist, written for the nonspecialist. It contains a minuscule amount of mathematics and no symbolic logic. Thus, it follows in the tradition of Percy Bridgman and Peter Medawar.1,2 The underlying theme of this book is: How do we trace the boundary between fantasy and reality? This question is not merely hypothetical; successful existence in the real world requires a good ability to define this boundary with some accuracy. Because illusions and hallucinations abound, rational man has been forced to create a system of elaborate mechanisms

and methods which aid in recognizing reality and separating it from the world of fantasy. These methods make up the system of knowledge called science. Thinking outside of science's domain invariably involves a large measure of fantasy. Disciplines that do not incorporate reality-testing into their methods are non-sciences. The philosopher Mario Bunge has defined a number of criteria that must be met by a science to distinguish it from nonscience.3,4 I will paraphrase some of these criteria:

1. A science deals with real entities in space and time.

2. A science has a philosophic outlook according to which the real world consists of lawfully changing concrete things (as opposed to unchanging, lawless, ghostly things) described by a realistic theory of knowledge rather than an idealistic theory. (By an idealistic theory we mean a theory in which ultimate reality consists of the immediate perceptions of our minds and the world outside the mind is nothing but inference. A realistic theory reverses this position, holding that real things are out there in the world and that we infer their nature on the basis of signals collected by our brains.)

3. The contents of a science change over time as a result of the accretion of new knowledge. This knowledge (as opposed to eternal verities handed down by higher authority) is acquired through research. The regular increase in validated knowledge is one of the most reliable measures of a science.

4. The members of the scientific community are specially trained, communicate information among themselves, and carry on a tradition of free inquiry.

5. Theories in science are logical or mathematical (as opposed to theories that are empty or formal).

6. A science has a fund of knowledge consisting of up-to-date and testable theories (which may or may not be final), hypotheses, and experimental data.

7. The aims of a science include the systematizing of data and hypotheses into theories and laws, followed by the use of these laws to make specific predictions about the workings of natural systems and man-made devices.

The ability of a science to make specific predictions is central to our confidence in its validity. If a system of theories cannot make predictions about observable events, then there is no reason to believe in it and it is not a science. The first two criteria for a science given above explicitly imply the principle of reductionism. Reductionism is the

philosophical position that the structure and behavior of all objects (including living things) can be reduced to the laws governing the behavior of the fundamental particles out of which everything is built. However, we are far from being able to make specific predictions about the behavior of living things starting with the known laws governing the motion of electrons and protons, because the structures involved are simply too complicated. Reductionism, when applied to the biological sciences, raises so many difficulties that a number of scientists doubt its validity. But either the laws of physics apply to living things or there are special laws, forces, or forms of energy that act only within living beings. Those who assert the existence of special forces

attempt thereby to introduce vitalism and other religious concepts into scientific philosophy; those who deny them maintain science on a strictly materialistic level, in accordance with the criteria above. The opponents of reductionism base their arguments upon the fact that we are unable to show how the laws of physics explain the assemblage of fundamental particles into living organisms. Therefore, they argue, other laws that apply specifically to those living systems must be used. In this book I will show that our thinking about the laws of nature can be simplified by dividing these laws into two major classes, laws of permission and laws of denial:

1. Laws of permission are those that enable us to predict what things are likely to happen to a system under a given set of circumstances. In classical mechanics, Newton's second law of motion and Hamilton's equations are representative of such laws. In quantum mechanics, the Schroedinger and Dirac equations are among the many recipes for predicting the fate of a system.

2. Laws of denial are rules that tell us what cannot happen to a system of objects. Such laws are known to physicists as "symmetry principles": space symmetry, time symmetry, and Lorentz symmetry are the most prominent among them. They are alternative, abstract ways of stating the classic laws more familiarly known as

conservation of momentum, conservation of energy, and the principle of relativity. All events that take place in the universe must obey these laws; that is, they must follow these symmetries, which very precisely separate the class of events that may happen from that which is forbidden. Hence the appellation "laws of denial." As we will see in Chapter 5, predictions made from the laws of permission may often be quite imprecise. Some systems, even very simple ones, may be so chaotic that predictions of their motion are completely impossible. On the other hand, predictions made with the laws of denial are always exceedingly precise and unequivocal. They put strict limits on what is allowed to happen. They permit us to

use the word impossible with great confidence, in spite of protests from those who would like to believe that "anything is possible." In this book I survey the reasons for believing that the laws of denial describe nature to a very high degree of precision and explore the manner in which these laws define the boundary between the possible and the impossible, between fantasy and reality. It will become apparent that even science requires the use of some fantasy, so that even under the best of circumstances there is some uncertainty as to what is fantasy and what is reality. But we will also see that the aim of science is to reduce this uncertainty to a minimum.

Understanding the laws of denial gives a logical basis for skepticism. Pseudosciences make claims of extrasensory perception, psychic energy, poltergeists, unidentified flying objects, and other exotic phenomena. It requires the knowledge of only one basic principle of science, namely, the law of conservation of energy, to justify a position of extreme skepticism toward these claims. Conservation of energy, as we shall see in Chapter 4, is experimentally verified to a degree of precision that makes it one of the most firmly grounded and solid pieces of knowledge in history. With such a permanent bit of knowledge in hand, we can justify our skepticism toward claims for phenomena that purport to do what in

reality cannot be done. One word of caution for the reader with little background in physics: Chapters 2, 3, and 4 are a quick summary of essential physical fundamentals. Those who have not read much in the way of modern physics may find it heavy going. I encourage you to persevere. Chapter 5 is the heart of the book, and the chapters after that deal with matters more philosophical and less technical.

Notes

1. P. W. Bridgman, The Nature of Physical Theory (Princeton, N.J.: Princeton University Press, 1936).

2. P. B. Medawar, The Limits of Science (New York: Harper & Row, 1984).
3. M. Bunge, Understanding the World (Dordrecht: D. Reidel, 1983).
4. M. Bunge, Skeptical Inquirer, 9 (Fall 1984): 36.

I BELIEFS AND DISPUTATIONS

1. Everybody's a philosopher

Everyday anarchy romps through the current intellectual scene: an engineer writes books on evolution, a science fiction writer becomes a psychotherapy guru and founds a new religion, a psychoanalyst rewrites the laws of celestial mechanics, theologians give pronouncements on physics, physicists write books on theology, and legislators write laws defining life. Within this confusion a few fundamentals remain constant:

a. A strong belief is more important than a few facts.

b. The stronger the belief, the fewer the facts.

c. The fewer the facts, the more people killed.

Recall the image of Jacob Bronowski in his historic television show, The Ascent of Man, standing in a pool at Auschwitz, dredging up a handful of mud possibly containing the ashes of his parents, describing how factless theories believed with total certainty by the Nazis resulted in the death of 50 million humans throughout the world during the period from 1939 to 1945.1

If disputes between facts and beliefs were easy to decide, Galileo would not have been threatened for claiming that the earth moved. He had the correct fact in hand, but was powerless to change the beliefs of others with that fact. His adversaries, on the other hand, had only beliefs and powers. Have matters improved since 1600? To the extent that our lives no longer literally depend on our cosmological theories, we are better off. Nobody is in fear of torture and execution for advocating evolution or the big-bang theory (at least in countries of the West). However, the fortunes of textbook writers still depend to some extent on the way they treat evolution in their writings. In addition, the livelihoods

of all of us depend on economic theories held by elected officials, theories which often assume the character of theological passions. In the years following 1980, the American public was subjected to an economic experiment intended to test the theory that lowering the tax rate would increase the government's income. The irony of this is that while medical experimentation on humans is forbidden except under strict controls, there are no such safeguards governing economic experimentation. Implementing the Laffer economic theory resulted in an immediate decrease of government income, with a consequent record leap in budget deficits. Here is an example of a government basing its actions on a theory unsupported by material facts or serious mathematical

analysis. In cases of this nature, faith in an economic theory cannot be distinguished from religious faith. While killing for the sake of beliefs is officially frowned upon in the United States, there are plenty of people willing to jail anybody who acts on the belief that a one-month-old fetus is not a human person. A physician who believes that the life of a severely deformed infant should not be indefinitely and artificially prolonged can end up in serious trouble if he or she acts accordingly. Wars of philosophical belief are waged daily in the media and on picket lines. Unfettered by standards of scholarship, undaunted by peer review, every citizen

feels entitled to express his or her view on the most profound subject. Advocacy groups pressure legislators to dictate what brand of science should be taught in public schools, what form of prayer should be encouraged, and how life may be legally terminated. We have become a democracy of philosophers. These disputes of pop philosophy are carried out with a freeform logic whose favorite rhetorical weapons are a series of questions beginning with the lethal prefix, "How do you know . . . ?"

In abortion debates the refrain is:

How do you know a fetus is not a human being?
How do you know a fetus doesn't feel anything?
How do you know when life begins?

In environmental conflicts:

How do you know smoking causes cancer?
How do you know radiation from a microwave oven or a computer terminal (or a digital watch) will not harm the user?
How do you know that radioactive wastes can be stored safely for a thousand years?
How do you know TMI won't blow up as soon as it's rebuilt?

In arguments over evolution and creationism:

How do you know the earth was created (or not created) 10,000 years ago?
How do you know that the laws of nature which existed thousands of years ago are the same as the laws that hold today?
How do you know that a creature as marvelously complex as a human being could have been created without a guiding plan from a higher power?

In discussions of science fiction, UFOs, parapsychology, and the like:

How do you know we can't find a way to travel faster than light?
How do you know antigravity is impossible?
How do you know that the laws of nature we believe true today won't be found false in the future?
How do you know there is no life after death?
How do you know ESP (telepathy, etc.) is impossible?
How do you know that paranormal phenomena, such as teleportation, telekinesis, and poltergeists, are impossible?
How do you know parapsychology is a pseudoscience?

Each question is hurled as a challenge, with the implication that there is no way of knowing the answer. It follows, then, that the theory backed by the questioner is at least as good as his opponent's. In fact, the insinuation behind the question is that if the opponent cannot answer the question with perfect certainty, then the theory he supports must be all wrong. The ultimate weapon in this form of logic is the claim that all theories are equal: the final democratization of philosophy. If theory A cannot be proved with complete certainty, then it is no better than theory B. Never mind that theory B cannot be proved at all. This is the philosophy behind the legal claims of the creationists:

since the proof of evolution is not completely without holes and loose ends, then the competing theory of creationism must be just as good and therefore must be taught on an equal footing in the schools (even though there is no validation at all for creationism). In the face of such logic we might wonder whether we can know anything for sure. There is nothing new about this problem. The Greeks also wrestled with it, giving us the word epistemology, the study of how we know what we know. Assuming, of course, that we know something. Otherwise, none of this makes sense and we may as well give up thinking altogether.

A better way to formulate the question is: How much of what we think we know represents something real in nature, and how much is fantasy, opinion, hypothesis, or sheer delusion? And finally we ask: How does knowledge get into our heads? Does it enter only through our senses, or are there other, more direct ways of knowing? These are serious questions. A good part of epistemology has been taken over by modern science, particularly physics, psychology, and neurophysiology. Through physics we learn the nature of the world around us. Psychology gives us insight into the way we put elementary observations together to make complex thoughts. It is the study of the world within the mind. A cautionary

science, it warns us that not every sensory perception is valid. The psyche is prone to accidents ranging from misperceptions, fantasies, illusions, delusions, all the way to hallucinations. The link between psychology and physics is neurophysiology, which, through detailed study of the nervous system, attempts to show how thoughts are related to specific activities going on in the brain. In spite of the inroads being made by the natural sciences, epistemology is still alive and well, most commonly under the heading of philosophy of science. Whenever scientists try to go from observations to theories, they become philosophers. The interpretation of observations is as important in science as

performing the observations. Getting in the way of interpretations are the knotty questions: What are the assumptions built into a science? What is the relationship between theory and observation? Is it possible to make an observation without using some theory hidden within the observation itself? How do we make general laws that apply to the whole universe when we can observe only a very limited sample of that universe?

When even our best scientists disagree on answers to these questions, it is not surprising that the public remains confused and that anyone can with cheerful abandon parade his beliefs as facts. Yet, in spite of disagreements, some degree of certainty can be assigned to scientific knowledge. Scientists have learned some things during the past few centuries; science does accumulate. The accumulation of knowledge is, in fact, one of the criteria for distinguishing a science from a non-science. While we read from time to time that "establishment" knowledge is unsure and subject to constant change, the fact is that at least some of the things we now know are

known "for sure" and are not going to change. After all, planes do fly, computers compute, and a man-made space vehicle can travel for many years past Jupiter, Saturn, and Uranus, following a calculated path with exquisite precision. When you push the appropriate switch, the electric light always goes on (barring malfunctions of the system, of course). The existence of our technological civilization depends on the certain knowledge of discrete facts and general principles. Without such knowledge you are never sure that the electric light will go on every time you flip the switch. It's fashionable to raise the cry: "How do you know the laws we believe now won't be overturned in the future?" (This plaint

is commonly raised in science-fiction circles whenever a scientist disputes the possibility of cherished fantasies such as faster-than-light travel or ESP.) To prove that one has read a book on philosophy of science, one makes obeisance to Thomas Kuhn's concept of paradigm change. To prove open-mindedness, one implies that all paradigms are subject to change, sooner or later.2 But, in fact, nowhere does Kuhn imply that all theories are subject to change. The fact that a theory went through one revolution in the past doesn't mean that it is automatically subject to further revolution in the future. The first revolution may have established the theory with sufficiently strong evidence to make it a permanently

grounded theory. It required a violent revolution to establish the idea that the planets go around the sun. Does anybody (outside of a tiny band of extremists) believe that this theory is subject to further change? We can go to the planets because we know their precise orbits. Reaching the planets verifies the theory. That's as direct a proof as you can get. Quite simply, one of the jobs of scientists is to search through mankind's collective information bank and to decide which of the bits of information stored there are facts known with a high degree of certainty and which others are in actuality opinions, theories, or conjectures masquerading as "facts" and subject to change in the future. This is not an easy

job. To decide between fact and conjecture requires an enormous fund of detailed knowledge, together with a talent for discerning how knowledge fits itself into patterns. Facts by themselves mean nothing. What separates a fine scientist from a simple data collector is a highly developed skill in pattern recognition. In physics, patterns of knowledge are called laws of nature. More precisely, they are laws written by humans to describe what exists in nature. An understanding of these laws allows us to answer at least some of the questions that begin, "How do you know. . . ." These philosophical problems belong to us all, not just to academics and dwellers

in ivory towers. Attacks on the teaching of evolution in schools have a chilling effect on school boards and textbook publishers. I have felt the weight of criticism for merely using the word "evolution" in a chemistry textbook. If the teaching of science is to be dictated by those who understand neither science nor the logic of scientific discovery, then our entire country will suffer from the ignorance of the next generation.

2. How do you know perpetual motion is impossible?

To some people the word "impossible" is offensive. "Anything is possible if you only try hard enough," is their battle cry. The obvious absurdity of this statement

seems to escape a large number of people, for it is uttered often enough to create a fog of optimism permeating the national psyche. Yet I know I will never play the piano like Vladimir Horowitz, no matter how hard I try. The proof is that thousands of piano students try exceedingly hard; none of them play like Horowitz, and most of them are better than me. There are limitations to my nervous system, as well as limitations to the human nervous system in general. There are also limitations to what machines can do. For hundreds of years there have been attempts to build a perpetual motion machine: a machine that would operate indefinitely, doing useful work without burning fuel. The quest for

the perpetual motion machine has rivaled the search for the holy grail, everlasting life, and the transmutation of lead into gold.3 Some perpetual motion seekers of the past spent lives pathetically obsessed with their compulsion to turn fantasy into reality. Others were simply con men, preying on the credulity of their fellow citizens to make a fast buck. A typical example of perpetual motion as a con game was the device created by one J. M. Aldrich late in the 19th century. The history and operation of this device was thoroughly exposed in the Scientific American of July 1, 1899. The device consisted of a wheel with a number of weights attached by levers to its periphery. It was designed so that the

weights on one side of the wheel lay further from the center than the weights on the other side. The imbalance thereby created was supposed to keep the wheel rotating forever, since gravity presumably pulled more strongly on one side of the wheel than the other. Unfortunately for the investors who thought they were going to get rich by harnessing this machine to an electrical power generator, what really kept the machine going was a secret spring hidden in the mechanism's wooden base. With the spring unwound, the wheel stubbornly refused to remain in motion for very long. It was very much a non-perpetual motion machine. As a result, Mr. Aldrich spent some time in jail, and those who had

bankrolled him bid adieu to their money. The tragedy of this story is that any good physicist of the time could have told these benighted folk that the machine could not possibly work, and that if they had performed some elementary engineering analysis they might have seen that the apparently unbalanced wheel was in actuality perfectly balanced. Their reasons for not asking advice could have been any one of the following: It did not occur to them that scientists might have a set of rules that allowed them to predict how the machine might or might not work. Or, they thought: What do scientists

know? They've been proved wrong before, so they are probably wrong now. Or, they thought: Anything is possible if you try hard enough. Scientists are no strangers to these problems. Attempts to create perpetual motion machines have been made since the era of Plato and Archimedes. Detailed records of these attempts started appearing in the 15th century. During the following three centuries, the general inability of inventors to build a working perpetual motion machine began to alert the scientific community to the fact that an important folly was being perpetrated. As a result, perpetual motion's perpetual failure became a significant contributor to

the budding science of mechanics during the 16th, 17th, and 18th centuries. Pragmatic scientists such as Simon Stevinus, Galileo Galilei, and Christian Huygens, observing the fact that perpetual motion machines have never been seen to work, jumped to the conclusion that perpetual motion machines cannot possibly work. However, they did not have a fundamental reason for coming to this conclusion. It was an induction from a number of known facts, not a deduction from a more general and more powerful theory. Therefore it suffered from the universal flaw of induction, namely, how can one generalize from a small number of observations to a rule that is true for every possible situation? If you observe, say, 24

perpetual motion attempts that end in failure, how can you be sure that all future attempts will also end in failure? To be sure of this, you must find a general law that is known to be true in all situations. A general law covering the perpetual motion situation began to emerge during the 18th century, when the concept of mechanical energy (kinetic plus potential energy) was created. Theoreticians such as Joseph Louis Lagrange were able to show that under very general conditions (e.g., in the absence of friction), mechanical energy was something whose quantity never increased or decreased (within any closed system). Starting with that observation, scientists could, with some degree of confidence, advance the

principle that no mechanical device can be built that creates energy out of nothing, or that puts out more energy than is put into it. Accordingly, the French Academy of Science decided, in 1775, to cease the drain of resources occasioned by interminable reviews of the many proposals received for perpetual motion machines. The reason was not simply that none of these machines had worked in the past, but that there was now a well-established law of nature from which it could be predicted that none would work in the future. Of course that didn't stop the inventors who took up the cry, "How do you know your theory applies to all kinds of energy?"
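To make Lagrange's statement concrete, here is a minimal numerical sketch in Python (the mass, height, and times are invented for illustration; only the standard kinematics of free fall is assumed). It follows a frictionless falling object and shows that, while the kinetic and potential energies each change from moment to moment, their sum stays fixed:

    # Minimal sketch: for a frictionless falling object, kinetic plus
    # potential energy stays constant even though each part changes.
    # All numbers are illustrative, not taken from the text.
    g = 9.8     # gravitational acceleration, m/s^2
    m = 2.0     # mass of the object, kg
    h0 = 10.0   # initial height, m

    for t in [0.0, 0.5, 1.0, 1.4]:           # times before the object lands
        v = g * t                             # speed after falling for time t
        h = h0 - 0.5 * g * t**2               # height above the ground
        kinetic = 0.5 * m * v**2              # (1/2) m v^2
        potential = m * g * h                 # m g h
        print(f"t={t:3.1f} s  KE={kinetic:6.2f} J  PE={potential:6.2f} J"
              f"  total={kinetic + potential:6.2f} J")
    # The total in the last column comes out the same (196.00 J) every time.

Add friction and the total no longer stays put, which is just the loophole the following sections take up.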

For energy turned out to be a most plastic and changeable idea, as demonstrated by the rise in the 18th century of an entirely new technology, that of engines creating motion from the marvelous properties of expanding steam. Some thought that these newfangled heat engines could somehow get around the restrictions of a theory that applied only to mechanical systems. And later on came the science of electricity. Who knew what wonders might arise from these developments? And so the inventors persisted. But the physicists also persisted. As we shall see in the next section, one major scientific trend of the 19th century was the growing recognition that each new form of energy discovered was convertible

without loss or gain into all the forms already known. In this way, energy evolved into a powerful general concept, and the principle of conservation of energy became one of the most powerful and fundamental laws of nature, assuring us that in any closed system energy cannot be created or destroyed, regardless of how it may change from one form to another. Even though our belief that energy cannot be created from nothing presently rests on an extraordinarily firm basis, attempts to build machines that put out more energy than is put in are still taking place. An inventor named Joseph Newman is currently besieging the U.S. Patent Office in the courts because it turned down his

application for a patent on an "Energy Generation System having Higher Energy Output than Input."4 Interestingly enough, Newman does make obeisance to the principle of conservation of energy, insisting that his device does not create energy from nothing, but rather only makes available energy that was always stored within it, so that it will eventually run down. Nevertheless, tests performed by the National Bureau of Standards indicate that the Newman machine does not in actuality put out more energy than is put in. Significantly, the Patent Office has never made workability a prerequisite for patentability in general. I have seen many patents for processes which have clearly

never been tried experimentally, and which probably would not have worked if tried. These ideas simply represent the overheated imagination of the inventor as he attempts to cover all bases and lay claim to every variation of the invention that enters his mind. However, in the case of perpetual motion the Patent Office plays a harder game. Right at the outset it lays down the rule: Perpetual motion is impossible, so we will waste no time looking at any applications on the subject. This is mere self-defense; it is the same rule the French Academy of Science adopted in 1775. Fortunately for us all, attempts at a classical perpetual motion device have

become a rarity. Few professional physicists will concern themselves for a moment over any machine claiming to create energy from nothing. The laws of physics appear to have won the battle. However, severing one head of the dragon only brings forth newer and more tenacious heads. As we will see later in this book, there are new obsessions abroad, even within academia, that trip headlong over the same fallacy as did perpetual motion, but in ways more subtle, so that the fallacy goes unnoticed. These new variations of perpetual motion sprout undaunted, their flaws obscured, their abuse of the energy concept hidden by intellectual sleight-of-hand. How can we be so sure that perpetual

motion in all its variations and ramifications may be dismissed as impossible? Haven't philosophers warned us that no knowledge is completely certain? The answer lies within the discoveries of modern physics. While a proper agnostic attitude requires us to admit that nothing is known with absolute certainty, modern methods allow us to compute the probability that a given piece of knowledge is true. One of the aims of this book is to demonstrate that the correctness of certain physical principles has such an overwhelmingly high degree of probability that for all practical purposes we may think of this knowledge as certain and true. In making this demonstration, we are

forced to burrow deep into the foundations of physics. As we do so, we discover that within the ebb and flow of intellectual fashion and changing theories, some parts of our increasing store of knowledge resist change and remain steadfast. These are areas of knowledge where the evidence is so precise, compelling, and invariant, that we are forced to the stubborn conclusion that at least some knowledge is both definite and permanent. The law of conservation of energy is knowledge of this kind. It is one of the foundation stones of physics, embedded in our understanding of fundamental particles, elementary interactions, and symmetry principles, abstract concepts that not only define physics, but which

define how we think about all of nature. In order to understand any kind of science it is essential to know what is meant by energy, to understand its role in physical processes, and to have a feeling for the incredible precision with which changes of energy can be detected in physical reactions. Only then can we understand why scientists believe that the amount of energy in a closed system cannot change. We will also see how an understanding of conservation of energy enables us to dispose of many delusions prevalent even today, among laymen and scientists alike.

3. How do you know you can't make energy out of nothing?

The history of the energy concept demonstrates how hypothesis, theory, and experimentation interact to create new knowledge. Prior to the invention of this concept, fantasies of creating motion by pure mechanism, without the motive power of either sun, wind, or fire, possessed a certain reasonableness based on ignorance of fundamental principles. Early perpetual motion seekers had no reason to know that what they were trying to do was forbidden by nature. I speak purposely of the invention of the energy concept. Energy, as an abstract concept, is truly a human invention, as opposed to the things that really exist in nature, which, from the point of view of the scientist, are the fundamental particles:

quarks, electrons, photons, gluons, and other miniature entities. All of these interact and combine to form objects that we observe either through our senses or by means of instruments. Without living creatures to observe them, they would simply exist, going through their motions and interacting with each other. They would get along perfectly well without us: After all, the stars in a distant galaxy don't care if we are looking at them. They do what they have to do. By contrast, abstract qualities such as beauty, goodness, momentum, and energy are concepts invented by humans to help make sense out of the behavior of observed things. Once invented and defined, these concepts are treated as

observable properties, as aids to pattern recognition and to the solution of problems. But they have no existence independent of the fundamental particles and of our interpretations of their activities. Energy, for example, is never measured directly. What we actually observe is the curvature in the path of a charged particle in a magnetic field, or the amplitude of an electrical pulse in a wire. Energy, as a physical quantity, is inferred from these observations; it is, in other words, a high-level abstraction. Why, for example, did the German philosopher Gottfried Leibniz (1646-1716) find it necessary to invent the term vis viva, the mass of a moving object multiplied by the square of its velocity,

in order to describe the object's quantity of motion? After all, he already had the concept of velocity to describe how fast the object was moving. Why create yet another abstract concept? Leibniz needed (or wanted) this concept because he had noticed that when two billiard balls (or other elastic objects) collide, the total vis viva of the system is unchanged. Such a "constant of the motion" seemed to be a useful property to describe the state of the system. In general, whenever you have something that remains constant while everything around it moves and changes, it would appear to be a useful and important thing. It certainly aids in the solution of problems in mechanics. Leibniz therefore

chose vis viva to be the quantity that describes the amount of motion in a system, and defined it in such a way that it was just twice the quantity we now call kinetic energy. (We might picture 17th-century scientists standing around the billiard table, arguing about the puzzling behavior displayed by little ivory balls careening about on the green cloth. The study of billiard-ball collisions was an active topic of 17th-century physics, and still finds applications in many areas of atomic and nuclear physics.) Complicating the matter, however, was the prior observation by Rene Descartes (1596-1650) that in a billiard-ball collision it is the momentum of the system that remains unchanged. (The momentum

of an object was defined as the mass multiplied by the velocity, with the velocity in one direction being positive and in the opposite direction negative.) There followed a long dispute between the Cartesians and the Leibnizians, the former claiming that conservation of momentum was fundamental, the latter insisting that conservation of vis viva was the rule. Gradually the confusion was cleared up by Christian Huygens (1629-1695), Jean d'Alembert (1717-1783), Johann Bernoulli (1667-1748), and Daniel Bernoulli (1700-1782), who showed that when elastic objects bounce against each other, both the total momentum and the total vis viva remain unchanged.5 Both

Leibniz and Descartes were correct, but a lot of blood was spilled proving it. New knowledge, after all, is not easy to come by. It was not until the 19th century that the factor of 1/2 was put in front of the formula for vis viva to give the quantity we now know as kinetic energy (½mv²). One of the first to recognize the need for that factor was Gaspard de Coriolis (1792-1843), better known for the Coriolis force felt by inhabitants of spinning bodies.6 The reason for making the kinetic energy of a moving object half the classical vis viva is so that the kinetic energy will equal the mechanical work done by the force that set the object into motion in the first place. The work, in

turn, is defined as the magnitude of the force multiplied by the distance through which the object moves while the force is applied (in the simple case where the force is in the same direction as the motion). But when we calculate the velocity attained by the object as the force is applied to it over a given distance, we find that the amount of work needed to set the object in motion can be stated entirely in terms of the object's mass (m) and final velocity (v). The calculation shows that force times distance equals ½mv², which is then, by definition, the value of the object's kinetic energy. Let us try to imagine the logical processes going on in physicists' minds as the energy concept gradually emerged:

Observation of moving objects suggests that certain simple regularities exist within complicated systems. After considerable fumbling, certain abstract quantities are defined: mass, force, momentum, vis viva, and the like. With the use of these definitions, it is found that equations can be written describing the motion of particles in specific situations: under the influence of known forces, or during collisions with each other. These equations represent general laws of nature. One kind of law, exemplified by Newton's second law of motion, describes how to compute the motion of a system of objects given their initial positions and velocities and given

the magnitude and direction of the forces acting on the objects. Another kind of law is represented by equations that are particularly simple: they affirm that if a system is isolated, then no matter how complex the motions of the objects within the system, certain properties of the system (momentum, energy) remain unchanged. This second class of laws is made up of the conservation laws, laws dealing with conserved quantities: properties of a system which, under certain specified conditions, stay constant, regardless of the detailed behavior of the system as a whole. The laws of conservation of energy and conservation of momentum thus become part of the foundations of

mechanics, the branch of physics dealing with the motion of objects. Clearly the process was not a simple one; its consummation required the effort of the greatest intellects in civilization for a period of three centuries. The key was the formulation of suitable definitions for fundamental physical properties such as mass, momentum, and energy. The criteria for the usefulness of a definition are hard to state. We can, if we want to, wave our arms and argue that a good definition helps us "understand" how nature works. This is true to some extent, and indeed is the reason for the invention of all abstractions. However, in dealing with the fundamentals of physics, more

precise motivations are needed, for the nature of "understanding" is not well understood. In the case of momentum and energy, two considerations were of major importance: First, momentum and kinetic energy are quantitative concepts: they represent quantities that can be measured (at least indirectly) with appropriate instruments and procedures. Accordingly, the core of these definitions consists of descriptions of the appropriate measurement procedures. Definitions of this nature are called operational definitions. Operational definitions are a necessary part of any scientific theory, as they provide the only basis for agreement on

what is being discussed. Second, the definitions of energy and momentum deal with quantities that are conserved during interactions between objects. They are quantities that remain constant while all else is changing. That is why they were perceived to be important. This property of invariance provides an important motivation for defining a number of important concepts in modern physics. Throughout the development of physics, the most fundamental work has been done, and is still being done, by those who search among all the variables of nature for things that are absolutely constant. Among such fundamental constants are the

speed of light and the rest-masses and electric charges of the fundamental particles. (Electric charge is another quantity that obeys a conservation law: in any closed system the total electric charge cannot change. This rule puts restrictions on the types of reactions that may take place when particles interact.) While the conservation laws made great simplifications possible in the solution of specific physical problems, in the 18th and 19th centuries some alert workers began to see that the law of conservation of energy appeared to have some loopholes. How, for example, could one account for the fact that no real machine continues to move indefinitely all by itself? The best wheel with the most

precise and well-lubricated bearings insists on slowing down and stopping unless propelled by an engine of some sort. Where does its energy go? The law of conservation of energy seemed to be deeply flawed. While energy was never created spontaneously, it seemed to disappear like water from a leaky barrel. The solution to this problem began to appear late in the 18th century, in the midst of a philosophical movement known as Naturphilosophie, which had become influential in Germany under the leadership of Friedrich Wilhelm Joseph von Schelling. Schelling wrote that all the forces of nature arose from the same cause: "magnetic, electrical, chemical, and finally even organic phenomena

would be interwoven into one great association . . . [that] extends over the whole of nature."7 Starting with this belief, it was logical to suspect that the energy concept could be extended beyond the domain of mechanical motion to include electricity, heat, and even the life sciences. Significantly, among Schelling's students were a number of scientists whose later work culminated in such an expanded concept of energy. These scientists began their conceptual revolution with the common observation that heat is generally evolved when two objects are rubbed against each other. For some time heat had been visualized as a kind of fluid, called caloric. The rubbing, according to theory, was supposed to release the caloric from the objects being

rubbed together, thus warming them. The chief flaw in the caloric theory was that there appeared to be an inexhaustible amount of caloric fluid available for release as long as the rubbing went on. Where did it all come from? Could an infinite amount of caloric be contained within a finite object? An alternative explanation was offered by Count Rumford (Benjamin Thompson, 1753-1814), an American adventurer exiled as a result of his support of the King of England during the revolution of 1776. In 1798, while engaged in the manufacture of munitions for King Ludwig of Bavaria, Rumford observed that cannon barrels became intensely hot while they were being bored, and made a suggestion

that changed the focus of thinking about heat. Summarily dispensing with the caloric fluid as an unnecessary concept, Rumford proposed that the heat caused by drilling was nothing more than another form of energy: thermal energy, into which the drill's mechanical energy was converted by friction. A triumph of American pragmatism. Experiments performed during the 1840s by James Prescott Joule (1818-1889) verified that whenever mechanical work was done by a drill, or by a paddle wheel spinning in a bucket of water, or by a piston compressing a cylinder of gas, the amount of mechanical energy that "disappeared" during this process equalled the amount of heat created. Thus,

simply by recognizing that heat was a form of energy (thermal energy), physicists were able to save the law of conservation of energy. They could show that the total amount of energy in the system mechanical plus thermalremained unchanged while work was being done. The secret to saving conservation of energy lay in the definition of thermal energy. First of all, the quantity of thermal energy had to be defined by prescribing how to measure it. The difference between temperature and quantity of heat had been recognized by Joseph Black as far back as 1760.8 Temperature was something you measured by the increase in length of a mercury column in a glass tube. Quantity of heat, on the other hand, was measured

by the increase in temperature of a known mass of water. From this definition came the British Thermal Unit (the Btu), the amount of heat needed to raise the temperature of a pound of water by one degree Fahrenheit. Mechanical energy, on the other hand, was measured by an entirely different process. A foot-pound of energy was defined as the work done by a force of one pound pushing something through a distance of one foot. What Joule showed by his experiments was that whenever mechanical energy was converted into heat, it always took about 772 foot-pounds of energy to warm one pound of water by one degree (say from 55 to 56 degrees F).
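The arithmetic behind such a measurement is simple enough to sketch. The short Python fragment below uses the 772 foot-pounds-per-Btu figure just quoted; the falling weight, the drop height, and the quantity of water are invented numbers, chosen only to make the bookkeeping easy to follow:

    # Minimal sketch of the mechanical equivalent of heat, using the
    # 772 foot-pounds per Btu figure quoted in the text. The weight,
    # drop height, and mass of water are made-up illustrative numbers.
    FT_LB_PER_BTU = 772.0                     # mechanical equivalent of heat

    weight_lb = 100.0                         # a 100-pound weight...
    drop_ft = 38.6                            # ...descends 38.6 feet, stirring the water
    work_ft_lb = weight_lb * drop_ft          # mechanical work done, in foot-pounds

    water_lb = 5.0                            # pounds of water being stirred
    heat_btu = work_ft_lb / FT_LB_PER_BTU     # heat generated, in Btu
    temp_rise_f = heat_btu / water_lb         # 1 Btu warms 1 lb of water by 1 deg F

    print(f"work = {work_ft_lb:.0f} ft-lb")   # 3860 ft-lb
    print(f"heat = {heat_btu:.1f} Btu")       # 5.0 Btu
    print(f"rise = {temp_rise_f:.1f} deg F")  # 1.0 degree Fahrenheit

Whatever the mechanical source, the same 772 foot-pounds always buys the same Btu; that constancy, not the particular numbers, is the point of the exercise.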

The essential feature of Joule's discovery was this: One Btu of thermal energy was always created from the same amount of mechanical energy, no matter what the source of mechanical energy, whether it came from friction or from the compression of a gas. This constancy meant that the mechanical equivalent of heat (772 foot-pounds per Btu) was not an accidental feature of the particular way of measuring heat or mechanical energy. Even more striking was the fact that the same conversion factor emerged when the heat was generated by passing electric current through a wire. Here was something new: the transformation of electrical energy into thermal energy. How could one measure

the quantity of energy delivered by an electric current? It was very simple. Joule noticed that when a current-carrying wire was immersed in water, the water became warmer. The amount of thermal energy gained by the water was measured in the usual way: by weighing the water and using a thermometer to find the change in temperature of the water. Joule found, in addition, that the thermal energy delivered by the wire to the water in a given amount of time depended on only two factors: the electrical resistance of the wire and the square of the amount of current passing through the wire. But detailed analysis of the electrical circuit showed that this quantity of energy was the same as the amount of mechanical work done by the source of electric current to force the

electric charges through the resistance. In this way the relationship between electrical energy, thermal energy, and mechanical energy was established. Heating by electric current passing through a resistance is still called Joule heating. The essential point of our story is this: When it was found that thermal energy was generated by an electric current passing through a wire, it was not assumed that this energy was created from nothing. Rather, in order to preserve conservation of energy, a new form of energy was defined: electrical energy. Furthermore, the definition of electrical energy was based on its equivalence to the

mechanical work done in forcing the electric current through the circuit. Thus defined, a given amount of electrical energy was always found (experimentally) to convert to the same amount of thermal energy. This intellectual process was repeated many times during the following century. Every time a known form of energy was seen to mysteriously appear or disappear, definition of a new kind of energy would, at the last minute, uphold the inviolability of the law of conservation of energy. For example, during the second half of the 19th century, it became necessary to explain how a hot object could cool down (lose thermal energy) even though it was

completely isolated, so that there could be no conduction or convection by material media. Or, the other side of the coin, how could bright sunlight transmit radiant heat through a vacuum 93,000,000 miles across so that bare skin was warmed on Earth? Here again a new concept was invented: that of electromagnetic radiation. Through the theoretical work of James Clerk Maxwell (1831-1879), physicists were able to understand that visible light and radiant thermal energy (infrared light) are nothing more than oscillations propagating through the electromagnetic fields that occupy empty space. Such waves have the ability to convey energy with the speed of light from the most distant stars through a

space almost completely devoid of matter. Once more conservation of energy was saved. Given a suitable definition of radiant energy (in terms of electromagnetic field strength), we can make precise measurements which show that the thermal energy lost by the hot body exactly equals the energy carried off by the radiation. Like the heroine of a cinematic adventure-melodrama, conservation of energy has been saved repeatedly from destruction by last-minute rescues. Every time the law appears to be violated, we simply define a new form of energy that brings nature back into conformity with the law. Indeed, it would almost appear as if conservation of

energy had been defined into existence. How then can we claim that it is an experimentally proved law? The experimental part of the law is based on these facts: First of all, every time we find a system in nature where energy appears to disappear (or to appear from nowhere), we are able to identify the existence of a new natural phenomenon quantitatively equivalent to a known form of energy. We have seen, for example, that mechanical energy lost through friction always reappears in the form of heat. Investigation of that loss showed that a quantity of heat (the Btu) could be defined

in such a way that it was equivalent to a specific number of mechanical energy units (foot-pounds). The convertibility of a unit of thermal energy into a known number of units of mechanical work was possible only if units of heat had the same dimensionality as units of work. Similarly, the unit of electromagnetic energy, defined in terms of electric and magnetic field strength, was shown to be equivalent to the unit of mechanical work. In general, all energy units, regardless of type, are dimensionally equivalent to units of mechanical work. Second, the macroscopic phenomenon being studied is found to be reducible to the actions of entities at a lower (microscopic) level.

In the case of heat, it was found (late in the 19th century) that what had been defined as thermal energy was actually a high-level abstraction for a more basic reality. The reality was that atoms and molecules moved about on a microscopic level within macroscopic objects, and that the thermal energy of these objects was nothing more than the total kinetic energy of their atoms and molecules. Once this was realized, it was clear that thermal energy was not really fundamentally different from classical mechanical energy. Thermal energy only appeared to be a different kind of energy because the motion of individual atomic particles could not be observed; however, it was inevitable that thermal energy could be measured with the same units as those

used for mechanical energy. Similarly, electromagnetic energy could be explained in terms of a fundamental particle called the photon, not discovered until early in the 20th century. However, since the photon is an entity basically different from an atom, electromagnetic energy is not the same as mechanical energy. Nevertheless, it can be transformed into mechanical energy because matter contains charged particles that interact with the photon according to specific rules which have the property of conserving energy. Once measurement procedures for mechanical energy, thermal energy, and electromagnetic energy are described,

when we observe the ways in which these forms of energy are transformed into each other, we find that a given quantity of mechanical energy always transforms into the same amount of thermal or electromagnetic energy. The transformation of one form of energy into another does not depend on the method used, and, most importantly, it takes place the same way every time. This consistency or invariance of energy transformations is the fundamental experimental content of the conservation law. Once a joule of mechanical energy and a joule of electromagnetic energy are operationally defined, every time we measure them in the future we will find that they are still equal.
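As a concrete illustration of that invariance, here is a minimal Python sketch of the bookkeeping. The specific heat of water (about 4186 joules per kilogram per degree Celsius) and the I²Rt expression for Joule heating are standard physics; the particular force, distance, current, resistance, time, and function names are invented for the example, arranged so that two very different routes deliver the same number of joules to the same kilogram of water:

    # Minimal sketch: two different routes deliver the same energy, measured
    # in the same unit (joules), to the same kilogram of water.
    # All specific numbers are invented for illustration.
    SPECIFIC_HEAT_WATER = 4186.0   # joules per kilogram per degree Celsius

    def heat_from_mechanical_work(force_n, distance_m):
        """Work done stirring the water: force times distance, in joules."""
        return force_n * distance_m

    def heat_from_electric_current(current_a, resistance_ohm, time_s):
        """Joule heating in an immersed resistor: I**2 * R * t, in joules."""
        return current_a**2 * resistance_ohm * time_s

    water_kg = 1.0
    route_a = heat_from_mechanical_work(force_n=200.0, distance_m=104.65)
    route_b = heat_from_electric_current(current_a=2.0, resistance_ohm=10.0,
                                         time_s=523.25)

    for name, energy_j in [("mechanical", route_a), ("electrical", route_b)]:
        rise_c = energy_j / (water_kg * SPECIFIC_HEAT_WATER)
        print(f"{name:10s}: {energy_j:8.1f} J -> temperature rise {rise_c:.2f} C")
    # Both routes deliver 20930 J and warm the water by 5.00 degrees C.

The point is not the particular numbers but the fact that, once every form of energy is expressed in the same unit, the totals on the two routes agree, and keep agreeing no matter how often the measurement is repeated.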

The proliferation of forms of energy leads to a new and serious question: How do we know that some new kind of reaction will not be found in the future, a reaction that allows energy to be created or destroyed, and for which no other new form of energy can be found to save the conservation law? After all, we have not experimented on all possible reactions or systems, and it would be physically impossible to do so. How do we know that in some strange part of the universe (in a black hole, in the center of a galaxy, or even within our own minds) there are not natural laws which have not yet been discovered, and which allow violation of conservation of energy? Such doubts have threatened conservation

of energy on at least three occasions during this century. One occasion was the discovery of radioactivity, which demonstrated that energy could be emitted from apparently inert matter. Was the energy being created upon emission of the radiation? Soon it was realized that the radiating matter was not quite as inert as had been imagined, and was simply emitting energy that had been stored in it long ago. It did not take long for another mystery to arise in connection with the emission of one particular kind of radiation (beta particles) from certain radioactive substances. Measurements of the energy carried away by beta particles showed that a finite amount of energy was being

lost. Where did it go? It required many years to demonstrate that the apparently lost energy was being carried off by an invisible and elusive particle: the neutrino. The lost energy had been found. The third doubt involved the steady-state theory of the expanding universe proposed by Thomas Gold and Fred Hoyle in 1948, according to which the universe, instead of being created in one big bang billions of years ago, was being created continuously and so had no beginning or end. This theory required the formation of matter and energy in small quantities throughout all of space in order that the density of matter be kept constant as the universe expands. However, there has been no verification of this theory, and no

observation of the creation of matter in space. On the contrary, all the evidence supports the big bang theory. Therefore, in spite of all threats, conservation of energy has remained inviolate. Furthermore, and most significant, while the first half of this century found the number of manifestations of energy increasing apparently without end, as a new century approaches the tendency is to reduce the energy concept into a single global phenomenon. Rather than a proliferation of numerous "forms of energy," all forms are now merging into one. Physics, while becoming mathematically more complex, becomes conceptually simpler, in that it requires fewer elementary ideas to explain the

universe. In the following chapters we shall look at some of the consequences of this new trend. In particular, it will become apparent that the new model of matter greatly strengthens the universal nature of the law of conservation of energy.

Notes

1. J. Bronowski, The Ascent of Man (Boston: Little, Brown, 1973), p. 375.
2. T. S. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).
3. For a complete history of the perpetual motion idea, see A. W. J. G. Ord-Hume, Perpetual Motion (New York: St. Martin's Press, 1977).
4. R. J. Smith, "An Endless Siege of Implausible Inventions," Science, 226 (Nov. 16, 1984), p. 817.
5. A. Wolf, A History of Science, Technology, and Philosophy in the 18th Century, vol. 1 (New York: Harper, 1961), p. 62.
6. G. G. de Coriolis, Calcul de l'effet des machines, ou considerations sur l'emploi des moteurs et sur leur evaluation (Paris: 1829).
7. F. W. von Schelling, Von der Weltseele (Hamburg: 1798). Also, P. Edwards, ed., The Encyclopedia of Philosophy (New York: Macmillan and Free Press, 1967).
8. R. Taton, The Beginnings of Modern Science (New York: Basic Books, 1964).

2. MODELS OF REALITY: PARTICLES

1. The Atomic Model

The repeated resurrection of conservation of energy through the discovery of new kinds of energy leads to a new source of worry: What if there is no end to the different kinds of energy that might exist? Physics would then be intolerably complicated. Every new perpetual motion machine would have to be investigated in detail to rule out the possibility of some hitherto unknown force creating energy behind our backs or sneaking it in through invisible interstices in space. Under such conditions there could be no appeal to a simple and universal law to deny the possibility of a machine's creating energy

out of nothing. Fortunately, however, we may breathe a sigh of relief, for recent work suggests that a simple structure hides beneath the complex surface of natural phenomena, which would mean that there are very few fundamentally different kinds of energy in the universe. Some physicists would even like to believe that all the different forms of energy are just various aspects of one basic energy. Proof of that conjecture lies not too far in the future. In the meantime, we have good reason to believe that all forms of energy can be classified into the following categories:

Rest-mass energy: an intrinsic energy

proportional to the mass of an object at rest.

Kinetic energy: energy associated with the motion of an object. In relativistic mechanics, both rest-mass and kinetic energy are combined in the object's total mass; the energy associated with this total mass is given by the familiar formula E = mc², where m is the mass and c is the speed of light.

Potential energy: energy associated with the four fundamental forces or interactions:

1. gravitation

2. electromagnetism
3. strong nuclear force
4. weak nuclear force

(The relationships between the concepts of energy, force, and interaction will be detailed in Section 3.1.) All the miscellaneous varieties of energy named in classical physics books (mechanical energy, acoustic energy, thermal energy, radiant energy, chemical energy, electrical energy, Gibbs free energy, and so on), as well as numerous subsidiary forces (such as the van der Waals force, Bernoulli force, Coriolis force, adhesion, cohesion, and surface

tension), can be reduced to manifestations of the four fundamental interactions. It is for this reason that it does not take an infinite number of experiments to verify the law of conservation of energy in all conceivable aspects. As a result it is not necessary to look in detail at every new perpetual motion proposal. The possibility of describing the workings of the entire universe in terms of just a few kinds of energy is an intrinsic part of the world view that underlies all of modern physics: the atomic model. This model describes all matter as composed of a few thousand kinds of atoms (including isotopes), comprising roughly 100 elements. Each of these atoms is made

up of a smaller number of elementary particles. All the possible ways in which these particles may interact to form compounds, crystals, and galaxies, to generate chemical reactions as well as the multifarious activities of living matter: all these complex goings-on result from the operation of the four fundamental forces listed above. This atomic (or particle) model of nature represents a way of thinking that did not exist prior to the 19th century except in the isolated speculations of Democritus, Newton, and a few others. The atomic model did not begin to approach its present form until the first half of the 20th century, after a century of difficult gestation. The model is still

undergoing development and has not yet reached its final form. However, enough is known to answer many fundamental questions. I do not intend to make this chapter a textbook on particle physics, but I would like to describe how the concept of "particle" evolved and how the concept of "fundamental interaction" permits answers to the questions with which this book is concerned. The struggle for a true understanding of matter began in 1803. In that year, the chemist John Dalton (1766-1844) proposed that all matter is composed of atoms and molecules, basing his proposal on measurements of the volumes and masses of various compounds taking part in chemical reactions. After a lengthy

period of infighting over the clarification of some technical details (most notably the fact that a molecule of hydrogen contains two atoms instead of the one claimed by Dalton), chemical theory emerged from great confusion into a more rational era. The acceptance of the atomic hypothesis by chemists was a difficult process requiring over half a century, from 1803 to 1860, to accomplish. (1860 was the year of the First International Chemical Congress in Karlsruhe, Germany, at which Stanislao Cannizzaro presented a lecture showing how Avogadro's hypothesis of 1811 could create order out of chaos by allowing chemists to make sense out of formulas and equations. In particular, the problem of the number of atoms in a hydrogen molecule was solved.)

Some physicists, more stubborn than chemists, did not fully accept the atomic hypothesis until the very end of the 19th century. Their reluctance was partly a reaction to the excesses of physical theories of the 18th and early 19th centuries, which typically appealed to invisible and essentially mystical entities to explain observed phenomena. Phlogiston, for example, had been used to explain combustion, while caloric purportedly explained heat. Similarly, the ether had provided a universal explanation for everything not understood about light and other forms of radiation. In a pendulum swing away from such easy and unprovable explanations, many physicists adopted an extremely hardheaded and skeptical point of view.

Scientists such as Ernst Mach (1838-1916) and Friedrich Ostwald (1853-1932) contended that science should concern itself only with objects and events that are directly observable, and should eliminate from its domain theories about invisible and undetectable objects. This was the early positivist position. Even though the kinetic theory of gases (the idea that gases consist of small molecules in rapid motion) provided a neat way of explaining their behavior, namely the variation in volume under different temperatures and pressures, there was great resistance to treating the concept of invisible, minute particles as more than a "convenient hypothesis." Even though gases behaved as though they

consisted of molecules, the early positivists denied that the molecular hypothesis was anything more than a creation of the mind. (The notion that science does nothing more than construct a hypothetical model of what is out there in nature, based on the evidences of our senses, was advanced both by the physicist Ernst Mach and the statistician and philosopher Karl Pearson, under the name of "sensationalism."1) However, as though on signal, physics exploded at the advent of the 20th century with a horde of technological advances that brought microscopic reality close to human perception. Influential in the conversion of molecules from a convenient hypothesis to a reality as solid

as the chair you sit on was the obscure and mysterious phenomenon first noticed by Robert Brown (1773-1858) as far back as 1827, and explained by no one until 1905. The phenomenon was the jittery motion, called Brownian motion, of small dust or pollen particles suspended in water, which anyone with a good microscope can observe. The year 1905 was the miraculous year in which Albert Einstein (1879-1955) published four revolutionary papers, each of Nobel Prize stature. In one of these papers Einstein described a mathematical theory explaining Brownian motion on the basis of a model according to which the jittering of suspended particles was caused by the pushing and shoving of tiny,

invisible molecules, like a giant ocean liner moved by many small tugboats. One result of Einstein's work was an equation that described the manner in which the number of particles in a given volume varies with their vertical position in the water (in the same way that the density of the atmosphere varies with altitude). Shortly thereafter, in 1908, Jean Perrin (1870-1942) began experiments to study the behavior of microscopic particles suspended in fluids. Patiently counting through a microscope tiny particles of gum resin suspended in water, Perrin not only verified Einstein's predictions, but used his observations to calculate the dimensions of the molecules responsible for the buoyancy of the resin globules.
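A minimal numerical sketch may make the height-distribution idea concrete. The exponential relation used below is the standard sedimentation-equilibrium formula (the suspended granules form a miniature atmosphere); the granule radius, densities, and temperature are invented, illustrative values, not Perrin's data.

    # A minimal sketch (not from the book) of the sedimentation-equilibrium relation
    # that Einstein derived and Perrin tested: the number density of suspended
    # granules falls off exponentially with height, just as the atmosphere does.
    # All numerical values below are illustrative assumptions, not Perrin's data.

    import math

    k_B = 1.381e-23      # Boltzmann constant, J/K
    g = 9.81             # gravitational acceleration, m/s^2
    T = 293.0            # room temperature, K

    # Assumed gum-resin granule: radius and densities are rough, plausible numbers.
    radius = 2.0e-7                          # m
    rho_resin, rho_water = 1200.0, 1000.0    # kg/m^3

    volume = (4.0 / 3.0) * math.pi * radius**3
    m_eff = volume * (rho_resin - rho_water)     # buoyancy-corrected mass

    # n(h) = n(0) * exp(-m_eff * g * h / (k_B * T))
    scale_height = k_B * T / (m_eff * g)
    print(f"scale height: {scale_height*1e6:.1f} micrometers")

    for h in (0.0, 25e-6, 50e-6, 100e-6):    # heights above the bottom of the cell
        ratio = math.exp(-m_eff * g * h / (k_B * T))
        print(f"n({h*1e6:5.0f} um)/n(0) = {ratio:.3f}")

Measuring how rapidly the concentration falls off with height fixes the value of Boltzmann's constant, and with it the scale of the molecules doing the pushing; that is how counted granules yielded molecular dimensions.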

After Perrin completed his measurements it became difficult to be skeptical about the physical existence of molecules. After all, when the size of an object has been measured, then the inadequacy of calling it simply a "convenient hypothesis" becomes apparent. With Einstein's theory and Perrin's experiments the fantasy of molecules had been realized, turned into reality. Though each individual molecule was invisible to the naked eye, the molecules en masse interacted with larger objects to produce visible effects. The ability of microscopic, invisible things to produce effects visible on a macroscopic, human level has become the paradigm for all our particle-detection procedures. Even before Einstein's Brownian motion

paper, physicists had begun to realize that nature was much more complicated than they had previously suspected. The last decade of the 19th century saw the discovery of many new types of radiation: cathode rays, X-rays, and, most baffling of all, the invisible radiation emitted by uranium through a mysterious process called radioactivity. In order to explain these newly-found rays, scientists were forced to visualize a level of existence beneath that of the atom, and thereby opened up most of the major fields of 20th-century physics. In 1897, so-called cathode rays were shown by J. J. Thomson (1856-1940) to be streams of electrically charged "corpuscles," each with the same mass and quantity of charge.

In the years that followed, H. A. Lorentz (1853-1928) and others began to call these objects electrons. With this development, the science of fundamental particles was begun. X-rays remained an enigma until 1912, when Max von Laue (1879-1960) demonstrated that they were simply transverse electromagnetic waves, differing from waves of visible light only in their shorter wavelength. The clincher was von Laue's direct measurement of the X-ray wavelength, using diffraction by a crystal of zinc sulfide. If you can measure a wavelength, there must be a wave (or at least something that behaves like a wave). The fact that X-rays simultaneously exhibited properties reserved for particles

was an embarrassment that eventually necessitated the development of quantum theory, which will be discussed in Section 2.3. As for the radiation issuing from the uranium atom, arduous work during the first decades of the 20th century by Ernest Rutherford (1871-1937), Marie Curie (1867-1934), and others showed this emission to consist of a threefold mixture: (1) a stream of helium nuclei (alpha particles), (2) a stream of fast-moving electrons identical to the cathode rays (beta particles), and (3) gamma rays, which were identical to X-rays, except that they issued from within uranium atoms rather than from man-made machines.

Over a period of two decades, a short time in the history of science, the physicists' concept of matter took on great depth and detail. Not only were molecules composed of atoms, but atoms themselves were complex structures of smaller particles, with electrons of airy lightness whirling about massive nuclei. Far from being inert, the nucleus was a dynamic furnace of seething energy, periodically exploding with emissions of radiation. Both nucleus and orbital electrons were so tiny that there was no hope of ever observing them visually, for they were far smaller than the wavelengths of visible light, and to image such an object with visible light would be like trying to snare a flea with a fish net; the mesh would be too big.
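The scale mismatch behind the flea-and-fish-net remark can be put into numbers; the sizes below are round, commonly quoted orders of magnitude rather than figures from the text.

    # Rough scale comparison: an atom is far smaller than the wavelength of visible
    # light, so visible light cannot resolve it. The sizes are round, commonly
    # quoted values (assumptions, not the book's figures).

    atom_diameter = 1e-10          # m, typical atomic diameter
    nucleus_diameter = 1e-14       # m, typical nuclear diameter
    green_light_wavelength = 5e-7  # m, middle of the visible spectrum

    print(f"wavelength / atom    ~ {green_light_wavelength / atom_diameter:,.0f} : 1")
    print(f"wavelength / nucleus ~ {green_light_wavelength / nucleus_diameter:,.0f} : 1")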

What were scientists to think of those electrons found in cathode rays? How should they fancy those alpha, beta, and gamma rays emitted from the cores of radioactive atoms? Were they to be considered convenient but ultimately fictitious ways of describing invisible forms of energy, just as the caloric fluid had provided a handy but deceptive way of visualizing heat? Not at all. Improvements in instrumentation at the turn of the century had made it possible to measure precisely the masses and electric charges of cathode ray electrons and of beta particles, which in turn made it possible to demonstrate that electrons and beta particles were one and the same. If you can measure an

object's mass and electric charge, then by definition it has real existence, even though it may not be visible (either to the naked or to the aided eye). In other words, an electron is anything whose mass is found to be 9.109534 x 10⁻³¹ kilograms and whose electric charge is negative in sign, with a magnitude of 1.6021892 x 10⁻¹⁹ coulombs. The particle is defined in terms of the totality of its measurable properties. Visual appearance and such properties as shape, color, odor, hardness, etc., are not only irrelevant, but meaningless. Even "size" is a property of ambiguous meaning. The development of instrumentation (especially radiation and particle

detectors) displaced the human eye as the arbiter of reality. Instrumentation extended the limits bounding the kinds of objects permitted into the realm of scientific discussion. Although the early positivistic philosophers continued to caution, "Don't talk about anything you can't see," the physicists went blithely ahead with their experiments, discovering hordes of invisible particles. The philosophers' tune then changed to: "Don't talk about anything you can't measure by means of a specific set of operations," a principle called operationalism.2 To a large extent this is still good advice, particularly in the physical sciences. The wonderful cloud chamber (invented in

1911 by C. T. R. Wilson), as well as its various offspring, such as the bubble chamber and the spark chamber, added another dimension of reality to scientists' thinking. Anyone who has ever seen fragile trails of white vapor spurt through the inner space of a cloud chamber cannot doubt that the particles causing those trails are real objects, be they electrons, protons, or alpha particles. Even though the particles themselves are invisible to the naked eye, their effect on the environment is so dramatic and immediate that their reality forcibly impresses itself on any observer's consciousness. Instrumentation transforms fundamental particles from abstract concepts into real things. We need no longer depend on the deceptive naked eye.

The particle model, which was made possible by instrumentation, ushered in a paradigm change equal in importance to the Galilean revolution: a total change in scientists' ways of thinking. For with this model the meaning of the word "explanation" underwent a change. In the past, scientific explanations had suffered from an excess of vague argumentation and appeals to imaginary entities such as phlogiston, ether, and caloric. By contrast, the particle model encouraged explanations in terms of measurable entities interacting according to known rules. It is also instructive to note that the atomic model originated as a fantasy, an imaginative thought in the mind of John

Dalton, a genuine visionary. Many of Dalton's notions concerning atomic structure were the purest fantasy, but the difference between his fantasies and those of a simple crank was that many of Dalton's ideas enabled one to make specific predictions about the behavior of matter, predictions which could then be experimentally verified. Fantasy, measurement, prediction, verification, and falsification are the central weapons in the scientist's methodical arsenal. The merits of each will become clear as we proceed.

2. The Particle Model

A particle is a thing that can be detected at

a particular point in space, and that has certain measurable properties such as mass, electric charge, and angular momentum (or spin). What do we mean by detecting a particle? Consider a simple example: A stream of alpha particles is directed at a photographic film. One of the alpha particles encounters a silver iodide crystal in the emulsion and displaces some of the crystal's electrons from their normal states. When the film is later developed, silver atoms separate from the iodine atoms in that crystal, forming a black spot. This black spot is evidence that the alpha particle has been detected. The location of the spot indicates where the particle came to rest. It is not necessary that a human

being see that spot. The permanent change in the state of the photographic emulsion is a sufficient condition for the detection of the alpha particle, in the technical sense of the word. Since there are those who say (or at least imply) that human observation is necessary for the detection of a particle,3 let us clarify this point with another example. This time, let us imagine a fast-moving electron entering a geiger counter. This energetic particle knocks orbital electrons out of some of the atoms in the gas within the counter. The released electrons initiate an electrical discharge (a spark) in the geiger counter. This spark sends an electrical pulse through wires into a counting circuit. The number in a

digital memory (or on a digital display) is increased by one. Each time a fast electron passes through the geiger tube and initiates a discharge, the digital memory registers that event. Thus the circuit counts the number of electrons that have been detected. Being detected, in this case, means starting an electrical discharge that ultimately changes the number in a digital register. That number can go into a computer to be a quantity in a computation. Or, the computer can arrange to ring a bell if the electrons come faster than a predetermined rate. Or, mechanisms can be activated to control a roomful of machinery in such a way that the intensity of the electron beam remains at a constant

level or varies according to a prearranged pattern. All this can happen without a human being observing what goes on. The only thing required for detection of the electrons is a permanent change in the environment: a change in the state of the surrounding system. As David Bohm, author of an influential text on quantum theory, has put it, "Although every observation must be carried out by means of an interaction, the mere fact of interaction is not, by itself, sufficient to make possible a significant observation. The further requirement is that, after interaction has taken place, the state of the apparatus A must be correlated to the state of the system S in a reproducible and reliable way." In

addition, "The various possible configurations or states of the measuring apparatus, corresponding to the different possible results of a measurement, can therefore correctly be regarded as existing completely separately from and independently of all human observers. "4 Much of the confusion over whether or not human observers are required to detect something that exists comes from a habitual tendency of physicists to speak in metaphors. John Wheeler was once fond of saying, "No elementary phenomenon is a real phenomenon until it is an observed phenomenon." Whereupon his students and readers assumed he meant "observed by humans." He now spends his time denying this interpretation, insisting that he merely

meant "detection" rather than "observation." Notice that, in the examples given above, a particle is detected when it knocks electrons out of their normal states in the matter through which it passes. This is the most common means of particle detection. In a cloud chamber, a traveling electron, proton, or alpha particle collides with atoms of the chamber gas, knocking orbital electrons out of them and leaving a trail of positive ions and negative electrons. Water droplets condense on these charges, forming a long, thin cloud. Another type of detection device is the bubble chamber, which contains a liquid (usually hydrogen) close to its boiling point. Passing particles leave trails of charged ions which

stimulate vaporization of the liquid, resulting in tracks of tiny bubbles. To knock electrons out of the atoms in a detector, a particle must be able to interact with the electrons through the electromagnetic force. A neutron (with no electric charge) will not, by itself, trigger a geiger counter or leave a track in a cloud chamber, but it can be detected indirectly: a fast neutron coming into a bubble chamber may collide with one of the hydrogen atoms in the liquid, causing the hydrogen nucleus to recoil like a billiard ball. The hydrogen nucleus then becomes a fast proton which shoots through the chamber and leaves a vapor trail behind it.
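A minimal, non-relativistic sketch of that billiard-ball collision, with an assumed neutron speed, shows why the recoil proton is such a faithful stand-in for the invisible neutron: because the two masses are nearly equal, a head-on collision hands over almost all of the neutron's kinetic energy.

    # A minimal, non-relativistic sketch (speeds assumed, not from the book) of the
    # "billiard-ball" collision by which a neutral neutron is detected indirectly:
    # a head-on elastic collision gives most of the neutron's kinetic energy to a
    # hydrogen nucleus (a proton), which then leaves a visible track.

    m_n = 1.6749e-27   # neutron mass, kg
    m_p = 1.6726e-27   # proton mass, kg
    v_n = 1.0e7        # assumed neutron speed, m/s (illustrative)

    # Standard head-on elastic-collision results:
    v_n_after = (m_n - m_p) / (m_n + m_p) * v_n
    v_p_after = 2.0 * m_n / (m_n + m_p) * v_n

    fraction_transferred = (m_p * v_p_after**2) / (m_n * v_n**2)
    print(f"proton recoil speed: {v_p_after:.3e} m/s")
    print(f"fraction of kinetic energy given to the proton: {fraction_transferred:.6f}")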

In 1888, Heinrich Hertz found that ordinary light shining on a metal surface would stimulate a flow of electric current. Later, this photoelectric effect was explained by assuming the light caused the ejection of electrons from the metal. Although light contains no electric charge, it does consist of electromagnetic fields which interact with electrons. Now, if a diffuse beam of light ejects an electron from a piece of metal, it is as though the energy of the light beam had suddenly been concentrated in the spot from which the electron was ejected. Measurements (first done by Philipp Lenard in 1902) show that the energy of the ejected electrons is linearly related to the frequency of the light wave. However, if the frequency of the light is less than a

certain value, no electrons are emitted from the metal surface at all. If the frequency is high enough so that electrons are emitted, then the number of electrons per second is proportional to the intensity of the light. Classical electromagnetic theory had no explanation for this phenomenon. According to Maxwell's equations, the energy density of a light beam should depend on its intensity, not on its frequency. An explanation, albeit a radical one, surfaced in 1905, when Einstein proposed that every light beam consists of a stream of tiny quanta of energy, whose energy is proportional to the frequency of the light, while the light's intensity depends on the number of quanta. When a

quantum of light is absorbed by an electron within a metal surface, it disappears, giving all its energy to the electron that absorbs it. If that energy is sufficiently great, the electron can jump right out of the metal, and its kinetic energy is equal to the quantum energy minus the amount of energy needed to separate the electron from the metal. In modern terminology, a quantum of light is called a photon. A photon is considered a particle just like all other particles. Yet the manner in which it was discovered emphasizes a fundamental paradox that characterizes all particles. Ordinarily, a beam of light is thought of as a spread-out thing, an electromagnetic wave with a measurable wavelength and frequency.
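The energy bookkeeping just described, in which the electron's kinetic energy is the photon energy minus the energy needed to free the electron, can be sketched numerically. The work function used below is an assumed, typical value for an alkali metal, not a figure from the text.

    # A minimal sketch of Einstein's photoelectric relation described above:
    # kinetic energy of the ejected electron = (photon energy) - (work function).
    # The work function is an assumed, illustrative value, not the book's.

    h = 6.626e-34       # Planck constant, J*s
    c = 3.0e8           # speed of light, m/s
    eV = 1.602e-19      # joules per electron-volt

    work_function = 2.3 * eV    # assumed value, roughly typical of an alkali metal

    def ejected_electron_energy(wavelength_m):
        """Return the electron's kinetic energy in eV, or None below threshold."""
        photon_energy = h * c / wavelength_m
        surplus = photon_energy - work_function
        return surplus / eV if surplus > 0 else None

    for wavelength_nm in (700, 550, 400, 250):   # red light through ultraviolet
        E_k = ejected_electron_energy(wavelength_nm * 1e-9)
        if E_k is None:
            print(f"{wavelength_nm} nm: below threshold, no electrons ejected")
        else:
            print(f"{wavelength_nm} nm: electrons ejected with about {E_k:.2f} eV")

Red and green light fall below the assumed threshold and eject nothing, however intense the beam, while violet and ultraviolet light eject electrons whose energy grows with frequency; this is just the behavior that classical wave theory could not explain.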

Yet when it is detected by means of the photoelectric effect, it is found in a sharply defined location like any point particle. Furthermore, this particle has a measurable mass, energy, momentum, and angular momentum. The same light beam, in other words, possesses two mutually incompatible sets of properties: sometimes it acts like a wave, and sometimes it acts like a beam of particles. The type of property measured (wave or particle) depends on the kind of experiment by which the measurement is performed; furthermore, no experiment ever shows the light beam to be both at the same time. (More about this in Section 2.3.) This paradox is not confined to photons

and light waves. When electrons are reflected from metal surfaces and then detected on photographic film, they form interference patterns (alternating bands of light and dark) on the developed film. Interference patterns are by definition evidence of waves. Analysis of these patterns shows us how to deduce the wavelength, which turns out to be inversely proportional to the electron momentum (mass times velocity). Thus an object intuitively thought to be a particle is nonetheless associated with some sort of wave. Not surprisingly, therefore, intuition is of no use in a situation like this. We became used to thinking of electrons as particles only because their wavelengths are

generally very small and their wave properties consequently more difficult to observe than the particle properties. Conversely, visible light traditionally appeared to us as waves because photons of visible light possess relatively small energies so that light's wave properties are easier to observe than its particle properties. In the case of X-rays (which are the same as visible light but with a shorter wavelength), the particle properties are easier to observe because of the higher energy of X-ray photons. X-ray photons can actually be observed bouncing off electrons like billiard balls (the Compton effect). Our "intuitive" expectations concerning the wave or particle nature of things are therefore simply habits conditioned by the ease of

making a particular kind of observation. The question then arises: What are we to call these hybrid wave-particle entities? In the past they have been called wave packets, and elementary texts on quantum mechanics devote many chapters to calculating their properties, showing pictures of them colliding with obstacles of various sorts. The wave packet concept helps us reconcile the image of a spread-out wave with the picture of a particle localized in space. According to this concept, the wave packets are spread out in space, but (in Max Born's interpretation) the probability of detecting the associated particle at any point within the packet is proportional to the square of the wave amplitude at that particular

point. This interpretation allows us to compute what to expect when atomic events take place. We should not, however, be deceived by a name. Putting the label "wave packet" on an object is merely a convenience: it helps us visualize something that is inherently impossible to visualize. It gives us a comforting but deceptive image, since it does not help us visualize, for example, how an electron can have spin and can transfer angular momentum during interactions. It is therefore necessary to beware of trying to use the wave packet model too literally. It is a mathematical abstraction that enables us to calculate consequences

of physical events. The accompanying visual image is not necessary at all. Particles are, in fact, much more complex than the elementary books would have us believe, as we shall see in greater detail in Chapter 3. In the meantime, we remember that a particle is the sum of its measurable properties, which are measured not by human perceptions, but by specific procedures, all of which rely on instrumentation. However, the human definition of the property being measured is the starting point for all measurements. The definition, in turn, depends to some degree on the theoretical concepts within the mind of the definer.5

This interplay between definitions and experiment is illustrated by the measurement of mass. While Isaac Newton felt satisfied to define mass as the amount of matter in an object, modern physicists find this definition unsatisfactory, for it does not tell how to measure the amount of matter. What does "amount" mean, in this context? Currently, an object's mass is defined to be the quantity of inertia associated with it; it is the mass found in Newton's second law of motion:

force = mass x acceleration

If two objects are acted on by the same force, then the acceleration of each object will be inversely proportional to its mass.
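A minimal numerical sketch, with an invented force and an invented measured distance, shows how such a comparison calibrates an unknown mass against a standard kilogram.

    # A minimal sketch of the mass comparison implied by F = ma: push a standard
    # kilogram and an unknown object with the same force, let both start from rest,
    # and compare the distances covered in the same time. The force and the
    # "measured" distance below are invented for illustration.

    force = 2.0            # N, same force applied to both objects
    t = 3.0                # s, duration of the push
    m_standard = 1.0       # kg, the known standard mass

    d_standard = 0.5 * (force / m_standard) * t**2   # s = (1/2) a t^2
    d_unknown = 3.0                                  # m, the distance we "measured"

    # Since distance is proportional to acceleration, and acceleration is
    # inversely proportional to mass:
    m_unknown = m_standard * d_standard / d_unknown
    print(f"standard mass travels {d_standard:.1f} m; unknown travels {d_unknown:.1f} m")
    print(f"inferred unknown mass: {m_unknown:.2f} kg")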

If the mass of one of these objects is known (e.g., a standard kilogram mass), then the mass of the other can be found by comparing the distances traveled by the two objects in a given time. Mechanical procedures of this kind are useful only for reasonably large objects. Atomic particles require more indirect methods. If the particles are charged, we make use of the fact that a charge moving through a magnetic field experiences a force in a direction perpendicular to both the field and the motion. A moving charged particle therefore traverses a circular path if the magnetic field is constant over the area of motion. Since the radius of curvature of this path is directly proportional to the mass of the particle, it

is easy to compare the masses of two charged particles traveling with the same speed through the same magnetic field. In all cases, the mass of an unknown object is found by comparison with a known, or standard mass. If the particle carries no electric charge, the method must be even more indirect. In the case of the neutron, we make use of the fact that a billiard-ball collision between a fast neutron and a hydrogen nucleus produces a fast proton. Appropriate measurements allow us to deduce the neutron mass. The high-energy photons in an X-ray beam will also engage in billiard-ball collisions with the electrons in a target. The velocity

of the ejected electrons can be measured, and from this data the photon mass can be calculated. However, as a practical matter it is usually the photon energy that is measured, from which the mass can be deduced using the Einstein equation:

E = mc²

A word must be said about the concept of photon mass. It is common practice among particle physicists to speak of the photon as a massless particle. Yet every real photon has a mass proportional to its energy, according to the above equation. We explain this discrepancy by saying that the photon really has zero rest-mass, and that what we measure is the dynamic mass, the mass of the photon traveling at the

speed of light. The trouble is that there is no such thing as a photon at rest: all photons travel at the speed of light. The term "rest-mass" would therefore appear to be inapplicable to photons. However, the meaning of the photon's so-called zero rest-mass may be understood by comparing the photon's energy with that of a particle such as an electron. An electron's total energy (E) consists of two parts: its intrinsic energy (E0), proportional to its mass when it is at rest (m0), and its kinetic energy (Ek). The total energy of an electron is the sum of its intrinsic energy and its kinetic energy:

E = E0 + Ek

Using the relationship between mass and energy, the equation becomes:

mc² = m0c² + Ek

In the case of the photon, the terms E0 and m0 are zero, so that the equations become:

E = Ek and mc² = Ek

That is, the intrinsic energy of the photon is zero, so that all of its energy is kinetic, and all of its mass is associated with the kinetic energy. This kinetic energy, in turn, is associated with the electromagnetic field that accompanies the photon in its travels and whose wave equation describes the photon mathematically. It is through this oscillating electromagnetic field that photons interact with electrons (e.g., in the

photoelectric effect or the Compton effect). The wave equation that describes an electron, on the other hand, is of a totally different kind. It is the equation of a "matter wave," the Dirac equation. (The Schroedinger equation is an analogous, more elementary equation that leaves out the property of spin.) Because the electron has a finite rest-mass while the photon does not, and because the electron and photon are described by two different kinds of equations, we classify them as different types of particles. In the next section we will survey the elementary particles and summarize their classification.

3. The Standard Model

The vast majority of physical scientists base their work on a model consisting of a few simple principles:

1. All matter is composed of a relatively few types of fundamental particles.
2. These particles interact with each other in a few specific ways.
3. Everything that takes place in the universe results from interactions between fundamental particles.

The above statements describe the philosophical position known as reductionism. A number of scientists have difficulty with this position. They point out, for example, that it is not possible to

predict the behavior of even the simplest living organism starting with the laws governing the interactions of fundamental particles. Furthermore, bulk matter possesses properties such as temperature, entropy, life, and intelligence, which are not possessed by single particles, so that physics, chemistry, and biology require higher-level laws to describe the behavior of matter on a macroscopic scale. Discussion of these questions will be postponed until Chapter 6. The model of fundamental particles developed during the 20th century is now called the "standard model." One of the problems encountered during this model's evolution has been the difficulty of deciding when a particle really is

fundamental, that is, when it cannot be broken down into even smaller particles. Early in the century the model was simple: everything was composed of atoms, and atoms were composed of electrons and nuclei. Within each nucleus were protons and neutrons. Electrons, protons, and neutrons were the fundamental particles of matter. In addition, the photon was the fundamental particle of electromagnetic energy. Starting in the 1930s, as physicists continued to investigate radioactivity, cosmic rays, and the results of high-energy nuclear collisions, a wide variety of other particles were discovered. The first of these, the mesons, were so named because their masses were midway between the

electron and proton masses. In time, as energies of particle accelerators were extended, scores of new particles were found with masses greater than the proton mass, and were accordingly called hyperons. Unlike the electron and proton, mesons and hyperons were unstable, decaying into lighter particles a very short time after being created in particle collisions. Their proliferation raised the suspicion that they were not fundamental at all, but rather composed of yet simpler objects. The 1960s saw the gradual development and acceptance of a new theory. This theory proposed that a new set of fundamental particles, called quarks, were the building blocks of all the heavier

particles: protons, neutrons, mesons, and the multitude of hyperons. The quarks are now an essential component of the standard model. Quarks exhibit two anomalies which differentiate them from all other particles:

1. The first has to do with electrical charge. All other particles carry electric charges that are integer multiples of the charge on the electron (with either a plus or minus sign), which was previously considered to be the fundamental unit of electric charge. The proton, although much different from the electron, possesses precisely the same amount of charge as the electron, but with the opposite sign. Were it not for this

equality, every atom would possess a net electric charge and would either attract or repel other atoms electrostatically. All of chemistry and physics would then be different from what we now observe. (The equality of the electron and proton charge is one of the most precisely measured facts in physics; the charge on the electron equals the charge on the proton to within one part in 10²⁰.6) Quarks, on the other hand, possess electric charges that are fractional relative to the unit charge of the electron. Of the six types, or "flavors," of quarks, three (up, charm, and top) have charges of +2/3 units. The other three (down, strange, and bottom) have charges of -1/3 unit.
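The fractional charges can be tallied with exact fractions; the proton and neutron combinations checked below are the ones worked out in the text that follows.

    # The fractional quark charges listed above, checked with exact fractions.
    # The proton (u,u,d) and neutron (u,d,d) combinations are the ones worked
    # out in the surrounding text.

    from fractions import Fraction

    charge = {
        "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
        "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
    }

    def total_charge(*quarks):
        return sum(charge[q] for q in quarks)

    print("proton  (u,u,d):", total_charge("up", "up", "down"))     # 1
    print("neutron (u,d,d):", total_charge("up", "down", "down"))   # 0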

2. The second anomaly concerns the fact that quarks' fractional charges are never observed in nature. The reason is that quarks, in contrast to the other particles, are never found alone. They always exist in close combination with one another, in such a way that the total combined charge is an integral number of unit charges. For example, a combination of two up quarks with one down quark forms a particle with a charge of one unit:

2/3 unit + 2/3 unit - 1/3 unit = 1 unit

This particle turns out to be a proton. On the other hand, a combination of one up quark with two down quarks has zero charge:

2/3 unit - 1/3 unit - 1/3 unit = 0 unit

This uncharged particle is the neutron. It is embarrassing to be faced with a set of unobserved and unobservable particles. The situation is uncomfortably reminiscent of past theories involving unobservable fluids: ether, phlogiston, caloric. In the century preceding quark theory, physicists had with great effort taught themselves not to countenance unnecessary and unobservable entities. Then they were suddenly faced with a set of unobservable but undeniably useful constructs: quarks. Were they real, or did they represent no more than a convenient hypothesis? For a period of several years many physicists struggled to believe in the reality of

quarks while at the same time hesitating to abandon their agnostic position. The discovery of new particles predicted by quark theory settled the question. (Notable was the psi meson, discovered in November, 1974.) To the relief of many, quark theory did predict observable events, and so qualified as a valid theory. The question concerning the invisibility of individual quarks turned out to be answerable by investigating the nature of the force that ties them together. This force is quite different from the electromagnetic force that binds electrons to the protons inside an atom. When an electron is separated from a proton, the farther apart they get, the weaker their attraction becomes. Thus an electron can be

separated from an atom by using a finite amount of energy. When, on the other hand, two quarks are moved farther apart, attraction between them becomes stronger and stronger. (The same thing happens when you pull on two ends of a spring). The result is that an infinite amount of energy is required to separate two quarks completely. Before the quarks become entirely separated, however, the energy expended in pulling them apart becomes great enough to create still other quark pairs, so that instead of two isolated quarks you end up with two or more quark pairs. This explanation, by itself, would be no more than a convenient self-deception were it not for the fact that quark theory

does predict the existence as well as the masses and other properties of scores of observed hyperons. Otherwise we would be in the position of the man who, when asked why he goes around snapping his fingers, replied that it was to keep tigers away. "That's silly," says his friend. "There aren't any tigers here." "You see," says the first man, continuing to snap his fingers. "It works. That's why you don't see any tigers." A theory, to be tenable, must provide positive predictions as well as negative ones. Within the standard model, the fundamental particles are classified according to the scheme outlined in Table

1. The two major categories of particles are the fermions and the bosons:

Fermions: particles of matter with a spin of 1/2 unit. (Named after Enrico Fermi, who first investigated their statistical properties.) There are two classes of fermions: leptons (light particles) and quarks. While six quarks are shown in the table, each of the six comes in three "colors" in order to account for the hundreds of mesons and baryons that have been found to exist. (The word "color" must not be taken literally; it simply refers to an abstract property, analogous to electrical charge.) Of the leptons, the electron, muon, and tau particle are primary. The three neutrinos

take part in various reactions and each is associated with its own primary particle. For example, in beta decay, a neutron changes into a proton, an electron, and an electron antineutrino.

Bosons: particles whose spin is an integral number of units. (Named for Satyendra Bose, who investigated their statistical properties.) The bosons in the table are quanta of energy that mediate the forces between particles of matter. Just as the electromagnetic force between charged particles is, in quantum electrodynamics, pictured as arising from the interchange of photons between the charges, so the standard model pictures the weak interaction, which is responsible for beta decay and other

reactions, as arising from the exchange of W+, W-, and Z particles. Similarly, the strong force binding quarks together (as well as neutrons and protons within atomic nuclei) is a result of the exchange of gluons. (These interactions will be discussed again in Chapter 4.)

Table 1
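As a rough summary, the particle names mentioned in this section and in the paragraph that follows can be grouped as shown below; the sketch lists names only, with none of the masses, charges, spins, or colors that a full table would carry.

    # An informal grouping of the particles named in the surrounding text.
    # Only names are listed; masses, charges, spins, and colors are omitted.

    standard_model = {
        "fermions": {
            "leptons": ["electron", "muon", "tau",
                        "electron neutrino", "muon neutrino", "tau neutrino"],
            "quarks": ["up", "down", "charm", "strange", "top", "bottom"],
        },
        "bosons": ["photon", "W+", "W-", "Z", "gluon", "graviton", "Higgs"],
    }

    for family, members in standard_model.items():
        print(family, "->", members)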

Not all of the particles listed in the table have actually been observed. The graviton and the Higgs particles are necessary parts of the standard theory; that is, they are required for mathematical consistency. The graviton is analogous to the photon. Exchange of gravitons between masses is responsible for the gravitational force. However, the weakness of the gravitational force has so far precluded experimental verification of the graviton. The Higgs particles are required for a consistent theory of interaction between leptons and quarks. The theory does not predict the mass of the Higgs particles, so it has been difficult to plan experiments to observe them.

Clearly, then, the standard model described in this section has not been thoroughly verified in all its details. Furthermore, nobody can say with certainty that the quark and electron, which the standard model treats as fundamental particles, cannot be divided into smaller components. Indeed, a new theory, superstring theory, is under development which states that the fundamental entity is a tiny string-like thing about 10⁻³⁵ meters long whose various modes of vibration constitute the particles we now recognize. The next few decades are sure to be filled with important and fundamental advances in particle theory. Therefore we must not consider the standard model to be a

complete theory. However, for events taking place in matter at the atomic and molecular level (events which affect the functions of mind and body), only four particles are of interest: the electron, the proton, the neutron, and the photon. All the others are of little or no importance to normal, everyday life. They were of importance during the first microsecond of the universe, when temperatures were enormously greater than they are now, and they are important in astrophysical processes (e.g., generation of solar energy, production of cosmic rays). But as far as chemistry, biology, and neurophysiology are concerned, we know, with a very high degree of confidence, what the essential particles are and how they behave.

In future chapters we will ask whether living beings display any properties or behaviors that can be predicted as precisely as we now predict the behavior of electrons and protons. Before this question can be answered, we must look more closely at these particles. We must see how they operate on the quantum level and try to understand what makes them behave as they do.

Notes

1. W. C. Dampier, A History of Science (New York: Macmillan, 1946), p. 317.
2. P. W. Bridgman, The Nature of Physical Theory (Princeton, N.J.: Princeton University Press, 1936), Chap. 2.
3. J. Gribbin, In Search of Schroedinger's Cat (New York: Bantam Books, 1984): ". . . It is meaningless to ask what the atoms are doing when we are not looking at them" (p. 120). "First, we have to accept that the very act of observing a thing changes it, and that we, the observers, are in a very real sense part of the experiment. . . ." (p. 160). "Nothing is real unless we look at it, and it ceases to be real as soon as we stop looking" (p. 173).
4. D. Bohm, Quantum Theory (New York: Prentice-Hall, 1951), pp. 584-585.
5. M. A. Rothman, Discovering the Natural Laws (New York: Doubleday, 1972), Chap. 2.
6. Ibid., Chap. 8.

3. MODELS OF REALITY: QUANTUM THEORY

1. The Quantum Model

A historical experiment, the two-slit diffraction experiment, illustrates the oxymoronic nature of fundamental particles: their wave-particle duality. Light from a point source is sent through a pair of narrow slits and directed onto a photographic film (Figure 1). After appropriate exposure and development, a series of alternate light and dark stripes is found on the film. These stripes (interference fringes) are evidence of the wave associated with the light, and can be explained using the equations of electromagnetic waves. Notice that the

light from one slit travels a bit farther than the light from the other slit in order to reach a given point on the film. If the path difference is half a wavelength, then the waves from the two slits will meet 180° out of phase and therefore cancel each other, making the light intensity at that point zero. If the path difference is a full wavelength, then the wave amplitudes add and the light intensity is at a maximum, causing a dark stripe to be formed on the film. Looking at these dark stripes with a microscope we find that they consist of thousands of tiny black silver grains. Each of these grains is the result of electrons in the emulsion being ejected from their normal energy states by absorbed photons of light.
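A minimal sketch of that geometry, with invented values for the slit spacing, wavelength, and slit-to-film distance, locates the maxima and zeros of the light intensity across the film. (Remember that the intensity maxima are what expose the film and so appear as dark stripes after development.)

    # A minimal sketch of the two-slit geometry described above, with invented
    # numbers: for each point on the film, compute the path difference between the
    # two slits and the resulting relative intensity. Maxima fall where the path
    # difference is a whole number of wavelengths; zeros fall where it is an odd
    # half-wavelength.

    import math

    wavelength = 550e-9    # m, assumed green light
    slit_spacing = 0.2e-3  # m, assumed distance between the slits
    screen_distance = 1.0  # m, assumed slit-to-film distance

    def relative_intensity(x_on_film):
        path_difference = slit_spacing * x_on_film / screen_distance  # small-angle
        phase = 2.0 * math.pi * path_difference / wavelength
        return math.cos(phase / 2.0) ** 2

    fringe_spacing = wavelength * screen_distance / slit_spacing      # ~2.75 mm
    print(f"expected fringe spacing: {fringe_spacing*1e3:.2f} mm")
    for x_mm in (0.0, 0.6875, 1.375, 2.0625, 2.75):   # steps of a quarter fringe
        print(f"x = {x_mm:6.4f} mm  relative intensity = {relative_intensity(x_mm*1e-3):.2f}")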

Significantly, if the intensity of the light is reduced to a very low value, so low that only one photon at a time passes through the slit-pair, the same interference pattern is formed. The only difference is that the pattern takes longer to form. That it forms at all, however, means that some kind of wave passes simultaneously through both slits, so that each photon must be spread out in space. However, when a photon forms a silver grain in the emulsion, it would seem to be concentrated at one small point. Its detection, in other words, is a particle manifestation, whereas the existence of an interference pattern seems to indicate that the photon is a wave-like entity extending over the volume of the entire apparatus.

FIGURE 1. (a) Without interference effects, light passed through a pair of slits would produce two dark stripes on the photographic film. (b) Because of interference effects, light passed through a pair of slits actually produces a series of dark stripes. (c) The interference occurs because the light behaves as though there is a source at each slit, emitting light in the form of circular waves. In certain directions, wave crests from one source meet crests from the other source, adding the amplitudes of the two waves (constructive interference). This interference causes the dark stripes to form on the film.

The two-slit experiment is a classic and obligatory feature of all textbooks on

atomic physics and quantum mechanics. Another experiment, first performed by L. Janossy in Budapest during the 1950s, makes the same point in a more forceful and dramatic manner.1,2 The experiment is designed to illuminate a question asked by Albert Einstein in 1927: What happens when a single photon encounters a half-silvered mirror? Curiously, no book that I have seen contains a description of this experiment, perhaps because the mystery it demonstrates is too impenetrable. Textbook authors, in general, do not like to discuss inexplicable matters. In spite of the mystery, however, this experiment can be performed in any reasonably well-equipped college laboratory. I have done the experiment myself and can attest to its fascination.

The Janossy experiment makes use of the rectangular arrangement of four mirrors shown in Figure 2. The light source is a mercury vapor lamp with a filter that eliminates all wavelengths except those from a single line of the mercury spectrum. (A laser is not used for the light source because it is necessary that the photons be uncorrelated.) The beam-splitter close to the light source is a "half-silvered" mirror; its silver coating is so thin that half the light goes straight through it, while half is reflected. The result is a pair of light beams perpendicular to each other, each with half the intensity of the original beam. These two beams are reflected by the mirrors at the upper right and lower left corners, and recombine at the half-silvered mirror in the lower right

hand corner. The combined light beam is focused onto a screen, where interference fringes (alternate light and dark bands) are seen when the mirrors are adjusted correctly. The apparatus described above is a Mach-Zehnder interferometer, commonly used to detect small optical changes in the paths of the two light beams. The interference fringes arise for the same reason that they appear in the two-slit experiment: transmitted and reflected light beams travel different distances to reach a given spot on the screen, resulting in either constructive or destructive interference. This result is precisely what would be expected from classical wave theory. So sensitive is the apparatus to changes in the light paths that pressing the table with a finger will cause

a visible shift in the fringe positions (even if the table is made of steel plate).

FIGURE 2. A Mach-Zehnder interferometer, consisting of a light source, a beam-splitter (or half-silvered mirror), two front-surfaced mirrors, and a final half-silvered mirror to combine the two beams. The combined beam produces interference fringes when viewed on a screen or photographic film.

The next step in the experiment is to put a filter in front of the light source to reduce its intensity. Intensity can be monitored by placing in the path of one of the light beams a photomultiplier tube that is capable of detecting single photons. The output of the photomultiplier is a series of electrical pulses that can be counted by a

fast electronic counter. When the counting rate is about one million photons per second, you can be assured that only one photon at a time is passing through the interferometer, since it takes about one hundred-millionth of a second for a photon to travel from the light source to the screen, a short interval compared with the time between photons. Do we still get interference fringes when only one photon at a time passes through the interferometer? The results of the two-slit experiment lead us to believe that we should. Indeed, if we replace the screen with a photographic film and expose it to the light beam for at least an hour, we find, when the film is developed, that interference fringes are in fact present.
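The arithmetic behind the one-photon-at-a-time assurance is quick to check; the source-to-screen path length below is an assumed few meters.

    # The arithmetic behind the "one photon at a time" claim, with an assumed
    # interferometer path length of about 3 meters.

    c = 3.0e8                 # m/s
    path_length = 3.0         # m, assumed source-to-screen distance
    rate = 1.0e6              # photons per second entering the apparatus

    transit_time = path_length / c     # ~1e-8 s, a hundred-millionth of a second
    mean_spacing = 1.0 / rate          # ~1e-6 s between successive photons

    print(f"transit time:              {transit_time:.1e} s")
    print(f"mean time between photons: {mean_spacing:.1e} s")
    print(f"on average, a photon clears the apparatus ~{mean_spacing/transit_time:.0f}x "
          "faster than the next one arrives")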

This observation is precisely what we expect from quantum theory. Each photon, upon encountering the first half-silvered mirror, is split into a transmitted wave and a reflected wave (Figure 3). These two wave packets travel around the two legs of the interferometer, and are recombined when they reach the second half-silvered mirror. When the combined waves reach the film, intensity maxima are formed at points where they are in phase, while minima are formed at points where they are out of phase. The probability of the photon being detected (forming a silver grain) at a given point is proportional to the wave intensity. Therefore where the intensity is a maximum, the number of silver grains generated will be a maximum. Thus, the

interference pattern seen is perfectly in accordance with the standard interpretation, which considers the wave packet to be a "probability packet."
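A minimal simulation of the probability-packet picture shows how fringes can emerge from strictly one-at-a-time detections: each simulated photon lands at a single random spot, with probability proportional to an assumed interference intensity, and the pattern appears only in the accumulated totals. The fringe spacing, film width, and counts are all invented.

    # A minimal Monte Carlo sketch of the "probability packet" interpretation:
    # each photon is detected at one random position, with probability
    # proportional to the interference intensity, yet a histogram of many such
    # one-at-a-time detections builds up the fringe pattern. Numbers are invented.

    import math
    import random

    random.seed(1)

    FRINGE_SPACING = 2.0   # mm, assumed
    FILM_WIDTH = 10.0      # mm, assumed

    def intensity(x):
        """Idealized two-beam interference pattern across the film (x in mm)."""
        return math.cos(math.pi * x / FRINGE_SPACING) ** 2

    def detect_one_photon():
        """One detection position, drawn by rejection sampling from intensity()."""
        while True:
            x = random.uniform(-FILM_WIDTH / 2, FILM_WIDTH / 2)
            if random.random() < intensity(x):
                return x

    # Accumulate detections one photon at a time and histogram them.
    bins = [0] * 20
    bin_width = FILM_WIDTH / len(bins)
    for _ in range(20000):
        x = detect_one_photon()
        index = min(int((x + FILM_WIDTH / 2) / bin_width), len(bins) - 1)
        bins[index] += 1

    for i, count in enumerate(bins):
        x_center = -FILM_WIDTH / 2 + (i + 0.5) * bin_width
        print(f"{x_center:+6.2f} mm | {'#' * (count // 80)}")

No single detection shows a wave; only the accumulated statistics do.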

FIGURE 3. A wave packet, after it encounters a half-silvered mirror, is divided into a reflected wave and a transmitted wave.

The interferometer also demonstrates the workings of wave-particle duality. While the photon is traversing the apparatus, our knowledge of its location is vague; it is behaving like a wave. We know that it must be spread out along both legs of the interferometer, because if you interrupt either the transmitted or the reflected beam, you lose the interference fringes. In order to have interference there must be something going around both paths, but we cannot say that the photon is in one path or the other. It is not until the photon is detected that it becomes localized. In

other words, only in the process of being detected does the photon exhibit the character of a particle. If we do not think too carefully about the meaning of the reflected and transmitted waves, this experiment does not arouse a great sense of mystery. However, let us ask: What happens if we try to detect the reflected and transmitted wave packets separately? Do we find two photons? Asking this question plunges us immediately into the deepest mystery of science. We attempt to answer the question by removing three of the interferometer mirrors and placing a pair of photomultipliers adjacent to the beam-

splitter as shown in Figure 4. Photomultiplier A detects the reflected part of the wave packet and photomultiplier B detects the transmitted part of the wave packet. If we count the output pulses from each photomultiplier, we simply find photons being detected at random. If the light source is sending a million photons per second into the beam-splitter, each photomultiplier counts about half a million photons per second. Nothing new is learned. (The actual number of counts is less, since the photomultiplier is less than 100% efficient.) Now let us connect the output of the two detectors to a coincidence circuit. A coincidence circuit is an electronic device that gives an output signal if it receives a pulse from detector A and from detector B at the same time.

The number of counts emerging from the coincidence circuit is therefore the number of "coincidences" produced by the two counters. In actuality, the pulse generated by each detector has a certain duration in time (about one microsecond in my equipment), so that coincidences will be produced whenever a pulse from A overlaps a pulse from B within that "resolving time." Inevitably, there will always be a certain number of random coincidences during a period of observation, caused by two unrelated photons entering the apparatus by chance during the resolving time. It is a simple matter to calculate this number if the counting rate in each detector is known.
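The standard estimate is that two uncorrelated pulse trains with counting rates R_A and R_B, and a resolving time tau, produce chance coincidences at a rate of roughly 2 x R_A x R_B x tau. The sketch below applies this to the counting rates and resolving time quoted in the next paragraph.

    # A minimal sketch of the standard accidental-coincidence estimate: two
    # independent, random pulse trains, with pulses that overlap whenever they
    # arrive within the resolving time of one another, produce chance
    # coincidences at roughly 2 * R_A * R_B * tau.

    def accidental_coincidence_rate(rate_a, rate_b, resolving_time):
        """Chance-coincidence rate (per second) for two uncorrelated detectors."""
        return 2.0 * rate_a * rate_b * resolving_time

    # The counting rates and resolving time quoted in the text: 5,000 counts per
    # second in each detector and a one-microsecond resolving time.
    print(accidental_coincidence_rate(5000, 5000, 1e-6), "per second")  # 50.0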

FIGURE 4. After passing through a beam-splitter, a photon may be detected in either photomultiplier A or B, but not in both simultaneously. This observation

is verified by the use of a coincidence circuit, which gives a signal when both detectors trigger at the same time. No coincidences are found over and above what might be expected from the random simultaneous entry of two photons into the apparatus.

For example, if each detector counts 5,000 photons per second, and the resolving time is one microsecond, then the chance coincidence rate expected is about 50 counts per second. And indeed, when the experiment of Figure 4 is performed, this is the figure we obtain. What does this mean? It means, simply, that no photons are counted by both detectors at the same time, over and above

what would be detected simply by chance. That is, ignoring the random coincidences, if detector A is triggered by a photon, then detector B does not trigger. If detector B is triggered by a photon, then detector A does not trigger. In other words, detection of a photon is an either-or proposition: A given photon is detected either in A or in B; never in both at the same time. There is a curious aspect to this observation. We have already shown that a single wave packet is split into a transmitted and a reflected wave by the half-silvered mirror. Both transmitted and reflected waves are identical. These identical packets impinge on identical detectors. How, then, does detector A know that it must not trigger if detector B

does trigger, or vice versa? (I do not intend to impute human characteristics to photo detectors; I use the term "know" here only in a metaphorical sense.) The traditional explanation for this is that the transmitted and reflected waves are not two separate wave packets, but rather two halves of a single packet. The argument against considering the two half-packets to be independent is based on the fact that the energy of a wave packet is inversely proportional to its wavelength. The wavelength does not change upon reflection or transmission (otherwise the color of the light would change). If we assumed the reflected and transmitted waves to be separate packets, each would have the same energy as the original

packet. But it is not possible for each of the two halves to carry the same energy as the original photon, for this would mean that the energy suddenly doubles when the photon encounters a half-silvered mirror. Therefore the transmitted and reflected parts of the wave must compose a single wave packet. The probability of the photon being detected at a given point is proportional to the intensity of the wave at that point, which means that the transmitted wave carries half the probability of being detected, while the reflected wave carries the other half. Now ordinarily, when two independent events, each with a probability of 50%, occur at random, the probability of both occurring at once is 25%. (For example, if two

coins are tossed together, a pair of heads will come up about 25 times in 100 throws.) But in the present experiment this does not happen. Therefore the probability of the photon being detected at A cannot be independent of its detection at B; in fact, the either-or nature of the photon detection process requires that the probability of both detectors triggering simultaneously be zero. Our conclusion must be that the photon can only be detected at one point within the wave packet; therefore only one detector can respond to it, and in so doing receives all of its energy. This conclusion seems to be not unreasonable; as far as it goes, it agrees

completely with the results of quantum theory. However, let us now enlarge the interferometer until each leg is at least 10 meters long. (I have not done this myself, so it should be considered a thought-experiment. However, the results that follow agree both with quantum theory and with the more sophisticated experiments described in the next section.) With path lengths this great, the two halves of the wave packet will be entirely separated, since the wave packet is expected to be only a few meters long. This separation makes the interferometer experiment qualitatively different from the two-slit experiment, where the wave packet is never really split into two disjoint parts.
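A rough figure for "a few meters" can be obtained by assuming, purely for illustration, that the atomic emission producing the photon lasts on the order of ten nanoseconds, so that the packet is about as long as the distance light travels in that time.

```python
# Rough length of a single-photon wave packet, assuming (for illustration)
# that the emitting atom radiates for about ten nanoseconds.

c = 3.0e8               # speed of light in meters per second
emission_time = 10e-9   # assumed duration of the emission, 10 ns

packet_length = c * emission_time
print(packet_length)    # -> 3.0 meters: "a few meters long"

# With interferometer legs of 10 meters or more, the transmitted and
# reflected halves of the packet are separated by much more than this.
```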

The result of the experiment is the same, regardless of the interferometer size. If detector A triggers, detector B does not trigger, even though both detectors are hit by identical wave packet portions. This effect is a correlation: a relationship between the events recorded by the two detectors. And this correlation is caused by a split wave packet whose two parts have no physical connection. How can this happen? One classical explanation for such occurrences, widely accepted for many years, was the concept of the collapsing wave packet, introduced by Heisenberg during the 1930s. According to this model, the wave packet instantly collapses when

the photon is detected. Therefore, when the photon is detected at A, the collapse of the wave packet prevents it from being detected at B. This explanation, however, leads to some difficulties, as illustrated by the following thought-experiment: Imagine that the apparatus is rearranged so that detector B is closer to the beam-splitter than detector A (Figure 5). The wave packet will now reach B before it reaches A. (A time delay is inserted into the line between B and the coincidence circuit to compensate for the greater travel time between the beam-splitter and detector A.) What is the result of this new arrangement? Simply this: the results of the experiment are unchanged. A photon is still detected either at A or at B, but never at both at the same time.
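The either-or behavior can be contrasted with the two-coin picture mentioned earlier by a toy simulation. The sketch below is only an illustration of the probability argument, not a model of the real apparatus or its electronics; it compares a hypothetical "independent detectors" model with the exclusive behavior that is actually observed.

```python
# Toy comparison of two models of the split wave packet at the detectors.
# This illustrates only the probability argument in the text.
import random

random.seed(1)
n_photons = 100_000

independent_coincidences = 0   # model 1: A and B each fire independently, 50% of the time
exclusive_coincidences = 0     # model 2: each photon fires A or B, never both

for _ in range(n_photons):
    # Model 1: two independent 50/50 events, like the two tossed coins.
    if random.random() < 0.5 and random.random() < 0.5:
        independent_coincidences += 1

    # Model 2: a single 50/50 choice decides which detector fires.
    fires_a = random.random() < 0.5
    fires_b = not fires_a
    if fires_a and fires_b:
        exclusive_coincidences += 1

print(independent_coincidences / n_photons)  # close to 0.25, the two-coin prediction
print(exclusive_coincidences / n_photons)    # exactly 0.0, what is actually observed
```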

FIGURE 5. The result of the coincidence experiment is the same as in Figure 4 even when the distance to one of the detectors is increased. Think about the meaning of the result. When the wave packet reaches B, the response of B must depend on what is going to happen to A. Detector B must not trigger if the photon is destined to be detected at A. But the detection at A has not happened yet. Thus detector B's action is determined by something that has not yet happened, which is a violation of causality. Imagine the experiment done with an apparatus so large that it takes a split wave packet one second to go from the mirror to detector A but only one microsecond to reach B. If the rule is that

the first detector hit by a wave packet responds immediately to the stimulus, then detector B would trigger every time, and detector A would never trigger. This does not happen. That means detector B must somehow "choose" whether to trigger or not to trigger. When does it make that choice? Does it know immediately that detector A is going to trigger one second from now, or does it wait a second until A does trigger? We know that B does not wait, because when B does trigger, its pulses are always emitted immediately upon arrival of the photon. To put it another way, when detector A triggers, it must inform detector B, 300,000 kilometers away and one second in the past, that it is not to respond to the

oncoming wave packet. Either way, time travel or precognition seems to be involved, neither one of which is appetizing to those who believe the cause should always come before the effect. There is a temptation to take an easy way out of the dilemma, which would be to assume that the photon possesses an inner structure that determines where it is going to be detected. If this were true, it would imply that the two halves of the split photon differ in some way. Indeed, the physicist Louis de Broglie at one time proposed that the energy of a photon is localized within a small region of the wave packet. According to this view, if the half-photon reaching detector A is the part of the wave packet with the energy,

that energy is deposited at A, making A the detector that triggers. But that leaves the other half of the photon without energy. What could be the meaning of such an empty photon? Leaving such questions aside, the concept of localized energy does seem to explain the interferometer experiment. The results of the experiment are as if one of the photon halves carried away from the beam-splitter information determining which of the two detectors that photon was going to trigger. Unfortunately, this kind of theory, known as a "local hidden variable theory," is refuted by experiments of a more refined nature. In recent years a number of such

experiments have graphically demonstrated that the nature of quantum mechanics cannot be explained by assuming that invisible processes take place inside wave packets. The internal nature of the wave packet is forever beyond our scrutiny. In the next section we will survey some of the newer experiments and consider some of their puzzling consequences. 2. The Bell Theorem Some of the deepest problems in philosophy have their roots in quantum theory; indeed, modern philosophy of science is to a great extent the philosophy of quantum theory. This is because quantum theory, in its role as the

fundamental theory of matter and energy, makes a number of statements which contradict our "commonsense" notions of nature. Philosophical problems arise when we try to make scientific sense out of these contradictions. Misunderstandings and wishful thinking have given rise to an extensive literature that attributes powers to quantum theory that it does not have. For clarity's sake, then, it will be helpful to review some of the recent discoveries in this field. One example of misunderstanding is the way in which Einstein's famous statement that "God does not throw dice" has been used to bolster all manner of diverse (and sometimes irrelevant) opinions. Mainly it has been used to demonstrate Einstein's

disapproval of quantum theory, especially its inability to predict exactly what will happen as a consequence of specific atomic reactions. Quantum theory can only predict the probability that a given result will take place, just as a dice thrower can only predict the probability of getting a seven when he makes a throw. Einstein's problem was not that he was unable to understand quantum theory. He understood it only too well. After all, he was one of its founders. His 1905 theory of the photoelectric effect explained the relationship between the energy and frequency of a photon and thereby established the connection between the wave and particle aspects of light. In

1924, while studying the statistics of gases, Einstein suggested that molecules also possess properties of waves, and that diffraction phenomena would be produced by a molecular beam passing through minute apertures.3 Thus he paralleled Louis de Broglie's invention of the concept of matter waves. Einstein's approval of a thesis in which de Broglie originated the idea of wave/ particle duality led to the acceptance of that idea by the physics community. At the same time, his ongoing debate with Niels Bohr clarified the conceptual and theoretical foundations of quantum mechanics. A more complete quote gives a clearer idea of what was in Einstein's mind: "It seems hard to look in God's cards. But I

cannot for a moment believe that He plays dice and makes use of 'telepathic' means (as the current quantum theory alleges He does)."4 It seems clear that Einstein was referring to experiments such as the split-photon experiment described in the previous section. He would not have liked the way detector A seems to tell detector B what to do regardless of the distance between them. What gave Einstein difficulty was his belief in "objective reality": the idea that objects in nature exist independently of us. He did not like the idea that the behavior of an electron depends on how we observe it, and thought that although wave packets give rise to indeterminate properties and probabilistic predictions,

these are only external manifestations of hidden mechanisms existing inside the wave packet: the "hidden variables" mentioned in the previous section. Einstein's final word on the subject came in a famous paper describing a thought-experiment illustrating what is now called the "Einstein-Podolsky-Rosen paradox."5 The paper begins with Einstein's concept of reality, expressed in two statements: "Every element of the physical reality must have a counterpart in the physical theory" and "If, without in any way disturbing a system, we can predict with certainty the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity."

The paper then goes on to describe a hypothetical experiment, the EPR experiment, involving a system of two particles (A and B) that interact with each other and then move apart. Due to the Heisenberg uncertainty, it is impossible to measure both the position and the momentum of each particle simultaneously. However, the relative position of the two particles and their total momentum are quantities that can be known precisely. (For example, if the two particles are pushed apart by a central force, then the total momentum is zero.) After the two particles have moved sufficiently far apart, the momentum of particle A is measured. We assume that this measurement cannot disturb particle B, since the two particles are far apart.

Now, if the total momentum is known, measurement of particle A's momentum gives us knowledge of particle B's momentum. This means that the momentum of particle B is a definite "element of reality," a property which it must have had since the two particles separated. If, on the other hand, the position of particle A is measured, then the position of particle B is known, making that a definite element of reality. But, as already stated, it is not possible in quantum mechanics for both the position and momentum of particle B to be elements of reality at the same time. Because the wave function of a particle does not contain enough information to describe its properties completely, Einstein considered quantum mechanics to be incomplete.
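The bookkeeping behind this inference is elementary, and the sketch below shows it in its barest classical form; it is only an illustration of the conservation argument and does not capture the quantum incompatibility of the two measurements, which is what gives the paradox its force. All numerical values are arbitrary.

```python
# The EPR inference in its barest classical form.  Conservation alone lets
# us assign a value to particle B without touching it; quantum mechanics is
# what forbids treating both inferred values as simultaneous elements of
# reality.  All numbers are arbitrary.

total_momentum = 0.0          # the pair is created with zero total momentum
relative_position = 10.0      # and a precisely known separation, x_A - x_B

p_measured_at_A = 2.7         # suppose a momentum measurement on A gives this
p_inferred_at_B = total_momentum - p_measured_at_A
print(p_inferred_at_B)        # -> -2.7, known without disturbing B

x_measured_at_A = 5.0         # or suppose instead we measure A's position
x_inferred_at_B = x_measured_at_A - relative_position
print(x_inferred_at_B)        # -> -5.0, again known without disturbing B
```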

An alternative interpretation of the experiment, more in line with conventional quantum theory, says that the two-particle system may be described by a single wave function (the mathematical description of a wave packet). Therefore the two particles are correlated: they influence each other even after they have moved apart for a great distance. That is, the measurement on particle A determines what is going to be a real property of particle B. If you measure particle A's momentum, then particle B has a real and definite momentum. If you measure particle A's position, then particle B has a real and definite position. Such a notion was not conceivable to Einstein, who wrote, in 1948, "That which really exists in B should . . . not depend on what kind

of measurement is carried out in part of space A; it should also be independent of whether or not any measurement is carried out in space A."6 However, as we saw in the last section, the detection of a photon at A depends on whether it has been (or is going to be) detected at B. It appears as though, like it or not, we must contend with a peculiar action at a distance in both space and time. It was on this point that Einstein broke with quantum theory. In 1951 David Bohm described another thought-experiment that demonstrated the same anomalies as the EPR experiment.7 Bohm visualized a molecule containing two atoms whose spins are in opposite

directions; the total spin of the system is zero. The two atoms are pushed apart by some force that does not disturb their spins. After they have traveled some distance, the spin of atom A is measured by a suitable apparatus. One of the consequences of the Heisenberg uncertainty principle is that with a single experiment you can measure the component of an atom's spin in only one direction. Measuring that component renders uncertain the components of spin in the other two perpendicular directions of space. Let us say we conduct an experiment that measures the x-component of particle A's spin. We now know, without having disturbed particle B in any way, that the x-

component of its spin has a value opposite to that just measured for A. Assume now that we rotate the measuring apparatus and perform the same measurement on the y-component (and then the z-component) of the spin of particle A. Since these measurements do not disturb particle B, and since the total spin of the system is zero, we now have simultaneous knowledge of all three components of the spin of particle B. But quantum theory says that we cannot know all three components of spin simultaneously. Therefore quantum theory is not telling us everything there is to know; that is, it is incomplete. During the 1950s a number of physicists (most notably David Bohm) tried to

complete quantum theory by adding hidden variables (invisible mechanisms within a wave packet) that would determine the behavior of particles, changing the indeterminism of quantum theory into a more deterministic theory. However, nobody was able to devise an experiment that would decide between standard quantum theory and any of the hidden variable theories. This lack was remedied in 1964 by the Northern Irish physicist John Stewart Bell. In a famous paper (which, through a concatenation of errors, was not published until 1966), Bell showed how to construct a logically consistent hidden variable theory that would be able to predict the probability of a specific observation.8

In another paper written at the same time, Bell applied his method to the Einstein-Podolsky-Rosen experiment and proved what is now called Bell's theorem,9 which prescribes a way of calculating the results of an EPR-type experiment and shows clear and measurable differences between the data that would be obtained under a hidden variable theory on the one hand and standard quantum theory on the other. Bell's theorem, for the first time in history, provided a prescription for testing the validity of alternate versions of quantum theory. It predicted that an electron carrying precise information about its own spin would give experimental results different from a quantum mechanical

electron whose spin is determined by its interaction with the measuring apparatus. Many authors since Bell have supplied their own derivations of the theorem and come to identical conclusions. A detailed historical summary of these efforts has been compiled by Max Jammer.10 Nick Herbert provides an interesting survey of quantum reality theories in non-technical language.11 An insightful description of Bell's theorem and its predictions is found in a brief paper by N. David Mermin.12 While the EPR experiment was originally only a thought-experiment, real experimental data began to appear in 1948. At first the results were ambiguous, due to the poor statistics provided by the

experimental techniques at the time. However, methods gradually improved and the results began to lean uniformly towards one clear conclusion: Quantum theory was correct and hidden variables were out. A climactic and definitive experiment was performed in 1982 by Alain Aspect and collaborators at the Institut d'Optique Théorique et Appliquée, in Orsay, France.13 The property studied in the Aspect experiment was the polarization of a light beam rather than the spin of particles, since polarization can be measured more easily and efficiently. To understand the experiment, we must distinguish between the classical and the

quantum concepts of polarization. Classically, a light beam consists of electromagnetic waves, with electric and magnetic fields oscillating at right angles to the beam's direction of travel (Figure 6). Ordinary light is unpolarized: it contains a mixture of electric fields oriented in all directions. If a beam of this light is passed through a polarizer (e.g., a sheet of Polaroid or a calcite crystal), it will be converted into polarized light, in which all the electric fields are oriented in a direction determined by the polarizer. If this polarized beam is passed through a second polarizer, the intensity of the emerging beam depends on the orientation of that polarizer relative to the entering beam's direction of polarization. If the angle between the two orientations is

zero, then all of the entering light goes through the polarizer. If the angle is 90°, then none of it does. As a rule, the intensity of the transmitted light is proportional to the square of the cosine of the angle between the polarizer orientation and beam orientation. Most interesting is the fact that the light transmitted through the second polarizer has its direction of polarization changed; it is now polarized in the direction of the second polarizer. Using the quantum model, we visualize the beam as a stream of photons. We do not think of a single photon as having an electric field; the oscillating electric field arises when we average the effect of millions of photons interacting with an apparatus designed to measure oscillating

electric fields. However, we can think of each photon as being in a specific state of polarization, connected with a specific direction. If a photon in a state of x-polarization encounters a polarizer aligned in the x direction, it has a 100% probability of going through. However, if it encounters a polarizer aligned in the y direction, it has zero probability of going through. The strangeness of quantum mechanics becomes apparent when the photon meets a polarizer inclined at some intermediate angle. The photon cannot split into two. Either it goes through or it is absorbed; the probability of transmission equals the square of the cosine of the angle between the photon's state of polarization and the orientation of the polarizer. If a polarized light beam

containing photons all in an x-state of polarization encounters a polarizer inclined at 60° to the x-axis, only one-quarter of its photons will get through, so the beam is reduced to 25% of its original intensity. Those photons which get through the polarizer emerge in a state of polarization parallel to the orientation of the polarizer.
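The cos² rule just described (the single-photon form of Malus's law) is easy to tabulate; the short sketch below, with angles chosen for illustration, reproduces the one-quarter transmission at 60°.

```python
# Single-photon transmission probability through a polarizer set at an
# angle theta to the photon's state of polarization: cos^2(theta).
import math

def transmission_probability(angle_degrees):
    """Probability that the photon passes the polarizer."""
    return math.cos(math.radians(angle_degrees)) ** 2

for angle in (0, 30, 45, 60, 90):
    print(angle, round(transmission_probability(angle), 3))
# 0 -> 1.0, 30 -> 0.75, 45 -> 0.5, 60 -> 0.25 (the one-quarter case), 90 -> 0.0
```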

FIGURE 6. A beam of unpolarized light is passed through a polarizer, and becomes polarized, with its electric field oscillating in one direction. If this polarized beam is passed through a second polarizer, its intensity is reduced, depending on the angle between the direction of polarization and the orientation of the second polarizer. How are we to explain this behavior? Should we think that each photon possesses a specific state of polarization that is switched by the polarizer to another state? Or does each photon contain the potentiality of more than one state, so that even when it has been put into one state by the first polarizer, it has a certain probability of being found in another state

by the second polarizer? This would be an example of the state of a system being partly determined by the mechanism that observes it (the polarizer). Polarization is a fascinating example of an effect that is easily observed, that is purely quantum mechanical, and that contradicts the causality assumptions of classical physics. The polarized photons encounter the second polarizer all in the same state; none of them carries a label saying that it is different from any of the others. Nevertheless, some of them go through the polarizer while the rest are absorbed. The same cause does not always produce the same effect.14 In the Aspect experiment, the source of

photons is the element calcium, whose atoms are pumped to an excited state by a laser beam. Each calcium atom decays in a two-stage cascade, going from the excited state down to its ground state in two rapid, successive steps. During the cascade, two photons are emitted in opposite directions but with identical polarizations (See Figure 7). The light from the whole calcium sample is unpolarized, but we know that any two photons emitted simultaneously must be in the same state of polarization, since they result from the same cascade. As each of these photons passes through a polarizer, the probability of its being transmitted depends on the orientation of the polarizer, as previously described. After transmittal the photons are counted by

photomultiplier detectors. To ensure that the two photomultipliers count photons emitted from the same atom at the same time, the pulses they generate are routed to a coincidence circuit having a resolving time of 18 nanoseconds (18 × 10⁻⁹ sec). The coincidence counting rate is a maximum when both polarizers are oriented in the same direction, because the photons being detected are in the same polarization state. The counting rate is zero when the polarizers are oriented perpendicular to each other. The important numbers, however, are the counting rates at intermediate angles. Standard quantum theory predicts one set of numbers, while quantum theory with hidden variables predicts another.
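One standard way of quantifying that difference is shown in the sketch below: the CHSH combination of correlations. The particular inequality, the analyzer angles, and the ideal-detector correlation function cos 2θ are chosen here for illustration, not taken from the book's Table or the published papers; the point is simply that quantum theory predicts a value of about 2.83 for this combination, while any local hidden variable theory is limited to 2.

```python
# Illustrative CHSH comparison.  For an ideal pair of polarization-correlated
# photons, quantum theory gives a correlation cos(2*theta) between analyzers
# whose orientations differ by theta; the angles below are the ones usually
# chosen to make the disagreement with local hidden variables largest.
import math

def correlation(theta_degrees):
    """Quantum prediction for the polarization correlation (ideal detectors)."""
    return math.cos(2 * math.radians(theta_degrees))

a, a_prime = 0.0, 45.0        # analyzer settings on one side, in degrees
b, b_prime = 22.5, 67.5       # analyzer settings on the other side

S = (correlation(a - b) - correlation(a - b_prime)
     + correlation(a_prime - b) + correlation(a_prime - b_prime))

print(round(S, 3))                        # -> 2.828, i.e. 2*sqrt(2), the quantum prediction
print("local hidden variable limit:", 2)  # any such theory gives |S| <= 2
```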

This experiment, like earlier experiments, yields a result consistent with standard quantum theory. This result alone is enough evidence to conclude that the measurement at detector A influences the measurement at detector B, and vice versa. What makes the Aspect experiment different from all previous studies is the manner in which it answers the question, "What happens if the orientation of the polarizer at A is suddenly changed while the photon is in flight? For this action changes the probability of the photon being detected at A. Does the new orientation now influence the probability that the sister photon will be detected at B?"

FIGURE 7. A pair of photons is emitted simultaneously from a calcium light source. They travel in opposite directions; each photon passes through a polarizer and is detected by a photomultiplier tube. The output of the

two photomultipliers is sent to a coincidence circuit.

This feature of the Aspect experiment is shown in Figure 8. Before each photon reaches a polarizer, it passes through an optical switching device (SA and SB). This light switch consists of a glass cell filled with water in which ultrasonic standing waves may be induced. When there are no standing waves, the photons pass straight through to polarizers A and B. When there are standing waves in the cell, the photons are deflected by these waves to another set of polarizers, A' and B'. The arrangement causes photons to be switched from one polarizer to the other

about every 10 nanoseconds. The rate of switching at B is made to be slightly different from that at A, so that no correlation exists between the switching on the two sides. That is, a photon going through polarizer A might have its sister photon passing through either B or B'. Now, since it takes 40 nanoseconds for each photon to travel from the source to the switching mirror, the switching takes place while each photon is en route from source to detector. The coincidence circuit gives counting rates for all of the possible polarizer pairs: A-B, A'-B, A-B', and A'-B'.
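The timing can be checked with the figures just quoted. The sketch below simply converts the 10-nanosecond switching interval and the 40-nanosecond flight time into distances; the point is that the analyzer settings change while the photons are in flight, too quickly for any light-speed signal to carry the choice from one side of the apparatus to the other.

```python
# Converting the quoted times into distances travelled at the speed of light.

c = 3.0e8                  # speed of light, meters per second
switch_interval = 10e-9    # the analyzers are switched roughly every 10 ns
flight_time = 40e-9        # quoted photon travel time from source to switch

print(c * switch_interval)  # -> 3.0 meters: light covers only this much per switch
print(c * flight_time)      # -> 12.0 meters of flight path

# The setting each photon will meet is therefore decided after the photons
# have left the source, and no light-speed signal could carry that choice
# across the apparatus before the detections are made.
```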

FIGURE 8. The apparatus of Figure 7 is modified by adding a pair of beam switchers and two more polarizers and

detectors. The beam switchers normally transmit the light to detectors A and B, but may be activated to switch the light to detectors A' and B'. If the switching takes place shortly after a photon has been emitted from the source, the photon will be deflected to one of the alternate detectors while in flight. The result of these measurements and calculations agrees precisely with the predictions of the standard quantum theory. This means that the probability of a photon being detected at B depends on polarizer A's orientation relative to polarizer B. But if the photon is switched so that it goes instead to polarizer A', then the detection probability at B suddenly depends on the relative orientation of

polarizer A'. Similarly, the detection probability at A depends on the relative orientation of either polarizer B or polarizer B'. This implies that we cannot think of a photon as possessing an intrinsic property of being polarized in a given direction. The polarization it exhibits depends on the detection process, which includes the detection of another photon some distance away. One way of understanding the results of this experiment is to think of the two photons and the four polarizers as a single system described by a single wave function. The probability of a pair of photons being detected at a given pair of

detectors depends upon the relative orientation of the corresponding pair of polarizers, regardless of the distance between them. (This is the mysterious action at a distance that Einstein abhorred.) Even though two photons are so far apart that their respective wave packets do not overlap, they behave as though they are correlated, the behavior of one depending on the behavior of the other. What are we to make of this? 3. Quantum theory and reality The brief description of quantum theory in this chapter is designed to emphasize the enormous difficulties one encounters when trying to visualize events that take place within the fundamental particles of matter.

Human language describes macroscopic objects well, but breaks down when we attempt to use it to describe the microscopic. Expressions like "the location of a particle" or "the location of the energy in a particle" or "the polarization of a photon" are quite simply without meaning. A particle may be detected anywhere within a large volume of space; the detection of a photon may depend on the state of another, distant photon; and even the term "wave packet" may not mean what we naively think it to mean when we write about it in textbooks. Clearly, these expressions are metaphors for things that can really only be described mathematically. In spite of difficulties in verbalizing and

understanding, we are still able to draw some definite conclusions from quantum theory and from the experimental results that verify it: 1. The consequences of single events on the atomic level cannot be predicted with certainty. If an electron collides with a proton, there is no way of knowing ahead of time the exact path that electron will take after the collision. If a photon encounters a half-silvered mirror, we cannot predict whether it will be detected on the transmitted or reflected side. Given a milligram of cesium 137 with a half-life of 30 years, it is impossible to know in advance which atom in that sample is going to emit a gamma ray photon at what particular time. The only thing that the

mathematics of quantum theory allows us to predict is the average behavior of whole collections of particles. These averages are the numbers that come out of the equations stipulated by the theory. 2. The observables of everyday life are determined by such averages. When you hit a golf ball, the path of the ball is the average path of billions of atoms. This path can be calculated as precisely as needed for practical purposes; the Heisenberg uncertainty has a minimal effect on the motion of macroscopic objects. 3. Even though the results of individual atomic-level reactions cannot be predicted with certainty, certain highly

valid statements can be made about them. We can state with great confidence that in any reaction between two atoms, nuclei, or fundamental particles, the total energy and momentum of the system will remain constant, since energy and momentum are conserved quantities. (This is true only to a degree of precision determined by the amount of time available for the observation, about which more will be said in Chapter 5.) Furthermore, because energy and momentum are conserved on the particle level, we can be sure that energy and momentum are also conserved on the macroscopic, humanly observable level, since everything we see on that level is the end result of myriad particle interactions.

4. Even though uncertainty and unpredictability make themselves felt primarily on the microscopic level, there are a number of situations in which they produce consequences that may be directly experienced on the human level. For example, a cosmic ray proton coming from outer space could interact with nuclei in the earth's atmosphere to form a shower of mesons, one of which might trigger a geiger counter. The output of the geiger counter could then trigger other mechanisms: It might generate an electric pulse that is counted in a digital register. It might signal a computer to initiate a calculation. It might set off an audible alarm. Or, one of the mesons might disrupt a DNA molecule in a human reproductive cell, causing a mutation that changes the

next generation. Or, it might damage a microchip element within a computer, causing an error in its operation. None of these events is foreseeable with any degree of certainty. No brain, no matter how powerful, can compute the events that would be caused by a single cosmic ray particle, since the paths of particles produced by cosmic ray collisions high in the atmosphere are intrinsically unpredictable. A single atomic event is able to produce important results in the human world because of amplification mechanisms provided by positive feedback. When an energetic particle ejects an electron from one of the atoms in the geiger counter, that

electron is accelerated by a strong electric field and soon has enough energy to knock more electrons out of atoms in the gas filling the counter. These, in turn, produce more electrons. The resulting cascade amplifies the original signal by a factor of millions, producing a large electrical pulse. This amplification changes the microscopic event into a macroscopic event that is capable of being remembered. Amplification is the crux of the detection process; it is what allows the quantum theorist to separate the object being detected from the detector. 5. We have seen that in the type of situation exemplified by the Aspect experiment, the wave function of a pair of particles seems to be spread out over a

wide region of space, so that what happens to one particle is determined, in part, by what happens to the other particle far away. A number of writers have tried to pretend that this correlation between the two particles (this mysterious, ghostly action at a distance) can allow us to communicate instantaneously from one place to another. They imagine that by wiggling the polarizer at B, we somehow "send a message" to the detector at A. This prospect is enormously pleasing to writers of science fiction and to others anxious to theorize a mechanism that would rationalize their belief in telepathy and in other forms of extrasensory perception. Unfortunately for such people, however,

nature is not so cooperative. Two fundamental laws apply here. First, the principle of relativity assures us that no information can be transmitted faster than the speed of light. Second, rules of information theory insist that a meaningful signal can be sent from one place to another only in the form of a modulated stream of numerous particles behaving on the average in accordance with the equations of quantum physics. A single particle by itself conveys no useful information, and a random stream of particles conveys no more information than a single particle. To illustrate, let us go back to the Aspect experiment. What does the observer at A see? He sees his detector responding to a

random influx of photons, sending out a series of pulses that are dutifully recorded. The counting rate depends only on the intensity of the light source and the efficiency of the detector. Not even the orientation of his own polarizer affects the counting rate, because the photons are emitted from the source in random states of polarization. Furthermore, nothing that the observer at B does to his polarizer has any effect on the counts that A obtains with his single detector. Recall that in the experiment, all of the interesting results were recorded by the coincidence circuit. The coincidence circuit detects correlations between the counts at A and B. In effect, it detects the relative orientation of the polarizers at the

two locations. But this information comes to it only at the speed of the corresponding electrical signals, which is less than the speed of light. No instantaneous transmission of information is available to human perception. What photon B "does" to photon A is invisible to the observer at A. Aware of these considerations, Aspect himself, in a footnote to his paper of 1982, specifically cautions us that "such results cannot be taken as providing the possibility of faster-than-light communication." It would appear that he was becoming impatient with attempts by popularizers of science and by science fiction writers to read more into the strangeness of quantum reality than is

actually there. In the end we are left in an anomalous situation. Quantum theory is highly successful, in that it can explain everything that happens in the atomic world. Numerous experiments have verified its predictions with exquisite precision. Nevertheless, its underlying philosophy is in a state of great confusion. Simple questions inspire intense controversy: What is meant by a wave packet? Does the wave function represent a real object, or simply the average of numerous measurements? What happens when a photon is reflected from a halfsilvered mirror? Great uneasiness surrounds not only the attempt to answer these questions but also the mere act of

asking them. For these questions deal with the nature of reality itself, and, like Einstein, we cannot help but feel uncomfortable in the face of certain of the implications of quantum physics. Although we understand the mathematics of quantum theory and are able to solve specific problems, we simply do not have a way of visualizing the underlying physical reality that the equations represent. At present, several competing theories of reality jostle for acceptance among physicists,11,15 although most physicists outside the small group interested in foundations get along quite well in their daily work without worrying about metaphysical questions. For in all the pleasant furor surrounding Bell's theorem

and the associated experiments it is easy to forget that they have not provided us with a new theory of quantum mechanics. Indeed, the final result of all such experiments has been a profoundly conservative one, in that they have eliminated alternate theories competing with standard quantum theory and the Bohr interpretation. Thus the foundation of the theory that has provided solutions to problems for the past half-century stands unchanged. What is new is a growing dissatisfaction with previous concepts of reality, a feeling that something is being swept under the rug when we accept not being able to visualize what goes on inside a wave packet. Einstein's unhappiness, in other words, has become contagious.

Notes
1. L. Janossy and Z. Naray, Acta Physica Hungarica, 7 (1957), p. 403.
2. G. Farkas, L. Janossy, Z. Naray, and P. Varga, Acta Physica Hungarica, 18 (1965), p. 199.
3. A. Pais, 'Subtle is the Lord . . .' (Oxford University Press, 1982), pp. 437-438.
4. Ibid., p. 440.
5. A. Einstein, B. Podolsky, and N. Rosen, Physical Review, 47 (1935), p. 777.
6. The Born-Einstein Letters (New York: Walker, 1971), quoted in note 12.
7. D. Bohm, Quantum Theory (Englewood Cliffs, N.J.: Prentice-Hall, 1951), p. 614.
8. J. S. Bell, "On the problem of hidden variables in quantum mechanics," Reviews of Modern Physics, 38 (1966), pp. 447-452.
9. J. S. Bell, "On the Einstein Podolsky Rosen paradox," Physics, 1 (1964), pp. 195-200.
10. M. Jammer, The Philosophy of Quantum Mechanics (New York: Wiley, 1974).
11. N. Herbert, Quantum Reality (New York: Anchor Press/Doubleday, 1985).
12. N. D. Mermin, Physics Today, 38 (April 1985), pp. 38-47.
13. A. Aspect, J. Dalibard, and G. Roger, "Experimental Test of Bell's Inequalities Using Time-Varying Analyzers," Physical Review Letters, 49 (Dec. 20, 1982), pp. 1804-1807.
14. A. P. French and E. F. Taylor, An Introduction to Quantum Physics (New York: W. W. Norton, 1978), Chap. 6.
15. B. S. DeWitt and N. Graham, eds., The Many-Worlds Interpretation of Quantum Mechanics (Princeton, N.J.: Princeton University Press, 1973).

4. INTERACTIONS 1. Nothing happens without an interaction While discussing the properties of fundamental particles and wave packets, we have warily circled the most important topic. We have said nothing about why particles behave the way they do. In nature, nothing observable happens "by itself" or for no reason. The cause of all activity on a macroscopic level can be found in the interactions among atoms and molecules or electrons and photons, which in turn can be explained as the sum of interactions between pairs of particles. Everything that happens in the universe, no matter how complex, is the result of these elementary interactions.

We emphasize that interactions are between pairs of particles. Even though most events involve numerous particles, they can always be analyzed in terms of interactions between pairs. All efforts to detect special interactions requiring three or more particles have failed. The term "interaction" has a special meaning in physics. It is a general term that means "anything that causes objects to change their state of existence." In its simplest form, an interaction appears as a push or a pullthat is, as something that causes a particle to change its state of motion. One obvious interaction of this type is the gravitational force, which causes all

masses to accelerate toward each other, and in particular causes unsupported objects to fall to earth. When gravitation is operating, the acceleration of the affected mass is the phenomenon actually observed. To explain that acceleration, we invent higher-level concepts that assume various forms depending on the theory we use. In classical mechanics, for example, we infer that the acceleration is caused by a force. Indeed, Newton's second law of motion gives us a formal definition of force: when a mass m is seen to accelerate at a rate a, then the quantity of force F equals the mass times the acceleration: F = ma. (More precisely, to account for situations where the mass changes, force equals the rate of change of momentum.)
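As a minimal numerical illustration (with arbitrary values chosen for the example), both forms of the law give the same answer.

```python
# Newton's second law in the two forms mentioned in the text.

mass = 2.0                   # kilograms (arbitrary example values)
acceleration = 9.8           # meters per second squared, roughly free fall

force = mass * acceleration
print(force)                 # -> 19.6 newtons, from F = m * a

# The same force obtained as the rate of change of momentum over a short interval:
dt = 0.001                               # one millisecond
velocity_before = 3.0                    # meters per second
velocity_after = velocity_before + acceleration * dt
momentum_change = mass * (velocity_after - velocity_before)
print(momentum_change / dt)              # -> 19.6 newtons again (up to rounding)
```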

Historically, the concept of "force" originated in the immediacy of the mechanical force you feel when somebody or something pushes you. This concept was later extended by the human imagination to explain the abstract phenomenon of action at a distance. Then, to explain the origin of forces, still other abstractions were invented. In classical field theory, for example, space is visualized as being filled by a gravitational field that interacts with masses. The strength and direction of this field at the position of a given mass determine the magnitude and direction of that mass's acceleration. In turn, the field's strength and direction at that point is determined by the magnitudes and relative positions of the remaining masses in

space. In relativistic mechanics, on the other hand, gravitational acceleration is interpreted as being caused by the curvature of space that is induced by the presence of the interacting masses, and the word "force" is considered superfluous. Finally, in the quantum theory of gravitation, as in all quantum theories of force, the acceleration is interpreted as being caused by the continual interchange of particles (gravitons) between the accelerating masses. Even in classical mechanics, the concept of force is not necessary to explain the motion of objects. In fact, at least ten different ways of formulating the laws of

motion exist in addition to Newton's second law, all of which give the same results and can be shown to be equivalent to each other. In two of these methods, the Lagrangian and the Hamiltonian methods, the primary agent of change is energy rather than force. In setting up the equations to describe the motion of gravitating objects (moons and planets, for example), we first write down a mathematical expression that represents the gravitational potential energy of the system at every point in space. Adding to that expression the kinetic energy of all the objects, we obtain a quantity called the Hamiltonian of the system (which in this case is equal to the total mechanical energy). If we insert this Hamiltonian into standard formulas known as Hamilton's

equations and then grind out the solution, we learn where each of the objects will be at every instant of time. The same sort of thing can be done using the Lagrangian method, in which the Lagrangian (the difference between the kinetic and potential energies) is the significant quantity. Both of these methods depend on our ability to write down an equation for the potential energy of a system at every point in space. The Lagrangian and Hamiltonian methods are the basis for the solution of all problems in quantum mechanics, which is why energy is now considered to be a more fundamental concept than force. Indeed, in quantum theory the concept of "force" disappears entirely. All that

remains is the concept of the interaction between two objects, expressed mathematically as either a Hamiltonian or a Lagrangian, the form of which is determined by the nature of the interaction. An interaction can do more than simply cause things to move. Some interactions cause particles to appear or disappear, or to change into other particles. Of course the interactions must be able to supply the requisite energy; in fact they may be considered a means of transferring energy from one place to another. To a large degree, the study of elementary particles is the study of the types and properties of the elementary interactions. To the relief of all concerned, there are

relatively few different kinds of elementary interactions. This fact has produced a remarkable simplification of physics, and a great strengthening of our ability to predict the kinds of events that can and cannot take place. 2. Types of interactions At one time, physics recognized a long list of apparently distinct forces and energies: hydraulic force, pneumatic force, centrifugal force, Coriolis force, surface tension, electrical energy, chemical energy, Gibbs free energy, radiant energy, thermal energy, and so on, not to mention psychic energy. During the first half of the 20th century, this proliferation of entities was pared down to four fundamental interactions recognized by physics as controlling all actions at the atomic level

and, by extension, all events in the universe. These four were the gravitational, electromagnetic, strong nuclear, and weak nuclear interactions. As described in the last section, each interaction may be described either as a force or as an energy, depending on which is more convenient. (If you know the potential energy at all points in space, you can calculate the force acting at any given point, and vice versa.) Now, in the latter half of the 20th century, the weak nuclear and electromagnetic interactions have been identified as two aspects of a single interaction termed the electroweak interaction. Thus the four fundamental forces have been reduced to three. Encouraged by this success, many

physicists believe that ultimately all the interactions will be subsumed under one universal interaction. This is the motivation behind the research into so-called "grand unification theories" (GUTs) and "supersymmetry theories," which currently make up the speculative edge of particle physics. The success of such theories would be of enormous significance to philosophy. If it could be shown that one fundamental interaction is responsible for all phenomena in the universe, then the task of predicting what is possible and what is impossible would be greatly simplified. However, since the evidence verifying a unification theory is not yet in, we will, for the purposes of this book, consider the

four fundamental forces (the gravitational, electromagnetic, strong nuclear, and weak nuclear interactions) to be the causes of everything that happens. (The electromagnetic and weak nuclear interactions are so different in character that we will treat them separately.) What we would like to focus on in this discussion are the forces that play a role in the explanation of commonly perceived events: the operation of machinery, the mechanisms of our living bodies, and the processes in our nervous systems responsible for perceptions, feelings, memory, and consciousness. But in order to determine which of the four basic interactions play an important role in

matters of life, we need to know something about their properties.
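Before taking up the four interactions one at a time, it may help to illustrate the remark made above that knowing the potential energy everywhere is equivalent to knowing the force. The sketch below checks, for Newtonian gravity and with arbitrary illustrative masses, that the rate of change of the potential energy with distance reproduces the inverse-square force.

```python
# Force as the negative rate of change of potential energy, checked
# numerically for Newtonian gravity with arbitrary example masses.

G = 6.674e-11            # gravitational constant, SI units
m1, m2 = 5.0e3, 7.0e3    # two masses in kilograms
r = 10.0                 # separation in meters

def potential_energy(r):
    return -G * m1 * m2 / r

dr = 1e-4
dU_dr = (potential_energy(r + dr) - potential_energy(r - dr)) / (2 * dr)
force_from_potential = -dU_dr            # radial force component; negative = attractive
inverse_square_force = G * m1 * m2 / r**2

print(force_from_potential)              # about -2.34e-5 newtons (attraction)
print(inverse_square_force)              # the same magnitude, from the inverse-square law
```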

1. The gravitational interaction. Two properties of interactions are important in this discussion: (a) their strength, and (b) their dependence on the position and velocity of the interacting particles. (For a comparison of the strengths of the fundamental interactions, see Table 2.) The strength of the gravitational interaction depends not only on the masses of the two objects involved but also on the interaction's intrinsic nature, which determines how strong the force is between a given pair of masses. Although gravitation is the weakest of the four forces, among objects as massive as planets and stars it predominates over the others in determining the motion of the system. Its strength varies inversely as the square of the distance between the two

masses, which makes it a long-range force. That is, the gravitational effect of any mass extends indefinitely through space. The orbit of a galaxy is influenced by the gravitational fields of galaxies millions of light-years away. The Coriolis force (felt by objects located on a rotating body) and inertial forces (such as centrifugal force) are manifestations of the gravitational interaction that depend on a moving mass's velocity and acceleration relative to the rest of the universe. 2. The electromagnetic interaction. Originally, electric and magnetic forces were considered to be two separate forces, but Maxwell's work in the second half of the 19th century showed that they are actually two aspects of a single

electromagnetic force. The electric force manifests itself as either an attraction or a repulsion between two electric charges. It is responsible for sparks, arcs, and electric currents of all kinds. That there are two kinds of forces, attraction and repulsion, derives from the fact that there are two kinds of electric charge, positive and negative. The reason we are not normally aware of electrical attractions and repulsions (even though all matter consists of electric charges) is that normal (un-ionized) matter consists of equal numbers of positive and negative charges (protons and electrons). One of the fortunate symmetries of nature ensures that the charge on an electron is exactly equal to the charge on a proton, so that attractions and repulsions between

ordinary bits of matter cancel out. In order for the electric force to display its power, some electrons must be separated from the protons they are attached to; that is, the atoms must be ionized. In their rush to get back to their normal states, the electrons then perform all the operations associated with electric currents. The magnetic force arises when interacting electric charges are in motion relative to the observer. In the case of a permanent magnet, such as a bar (or horseshoe) magnet, the magnetic field arises from the spin of electrons within the material. The electric fields of the atomic protons and electrons cancel each other out because, as just described, there are as many positive as negative charges within

the magnet. Because of this cancellation the magnetic field, which would ordinarily be a small addition to the electric field, becomes predominant. When electric charges not only move but also change their velocities (undergo acceleration), electromagnetic waves are generated. These waves travel away from the source with the speed of light and are the means by which energy is conveyed from one place to another. Electromagnetic waves are the only entities known to physics (aside from solid projectiles and assorted particle beams) that can transmit meaningful information from one place to another through empty space. (Sound waves and carrier pigeons do not go through empty space.) Theoretically, such entities as

gravitational waves and modulated neutrino beams could also carry information, but the weakness of their effects makes their use for that purpose extremely implausible. In quantum theory, the electromagnetic interaction is pictured as the end result of an exchange of photons between electric charges. In addition to producing the familiar attractions and repulsions between charges, this interaction accounts for a number of fundamental processes, such as electron-positron pair production, photonuclear reactions, and so on. It is also responsible for the van der Waals force that holds molecules together and for the alignment of ions and electrons into crystalline forms. The very existence of

liquids and solids can therefore be attributed to the electromagnetic interaction. Electronic devices like photocells, diodes, and transistors also owe their properties to the electromagnetic interaction. All chemical reactions, including the electrochemical events that propagate information within the human nervous system, are controlled by the electromagnetic force. The strength of this force (as shown in Table 2) is 10³⁸ times greater than that of the gravitational force. What this means is that if you measure the electrostatic repulsion between a pair of protons, it will be 10³⁸ times stronger than their mutual gravitational attraction. The magnitude of the electric force varies

inversely as the square of the distance between the particles, so that, like the gravitational force, it is a long-range force. Within an atom, the electrical attraction between a proton and an electron is in the neighborhood of 10⁴¹ times greater than their gravitational attraction, so that the gravitational interaction may be safely omitted from any calculation of the events that take place inside an atom or molecule. (To get a rough idea of what 10⁴¹ means, consider that the mass of the earth is about 10³¹ times greater than the mass of a grain of sand.) When we deal with large objects like planets and satellites, on the other hand, the electromagnetic force is vanishingly

small in comparison with the gravitational force. This is because fortunately, as already mentioned, the electric forces in normal, neutral matter cancel each other out, so that the motion of the planets can be calculated solely on the basis of the gravitational interaction. In the case of small satellites, it might be necessary to take into account the fact that the satellite may have picked up a few stray electrons or ions, since the resulting net electric charge will interact with the earth's magnetic field. But this is only a small perturbation that can be dealt with as the need arises. On the other hand, if we are interested in investigating the functioning of the nervous system or of other biological systems, the

motion of electrons and ions is of prime importance, so that in this context only the electromagnetic interaction is of real interest. 3. The strong nuclear interaction. This interaction is what joins quarks together to form neutrons, protons, mesons, and other, more massive, particles. It also causes the attraction between neutrons and protons that permits the formation of atomic nuclei. The relative strength of 10⁴⁰ cited in Table 2 for this interaction refers to a pair of nucleons touching each other, which means that within a nucleus, a pair of adjacent protons are attracted to each other by a nuclear force about 100 times greater than the electric force that pushes them apart. This is why nuclei with more

than one proton are able to exist. The nuclear attraction is stronger than the electrostatic repulsion and so forms the "glue" that holds the entire nucleus together. The strong nuclear force is a short-range force. That is, it is felt only by particles closer to each other than about 10⁻¹⁵ meters. Indeed, the radius of a neutron or proton is defined as the range of the strong nuclear force. Consequently, nuclear particles do not experience the strong force unless they actually touch each other, which means that the strong force plays a role only within the atomic nucleus or in high-energy reactions that take place when one nucleon bombards another in accelerator experiments.

Because the strong force acts only within the nucleus, it has no effect on the orbital electrons, which are responsible for all of chemistry. For this reason, it need not be considered in any discussion of biology, physiology, or psychology, except insofar as it determines the existence and behavior of certain isotopes. 4. The weak nuclear interaction. Another short-range interaction, the weak nuclear force, causes the transformation of a neutron into a proton plus an electron plus an antineutrino, the reaction responsible for the beta decay of many radioactive nuclei. The weak nuclear interaction also causes the decay of a muon into an electron plus a neutrino-antineutrino pair. None of these reactions play a role in

chemical processes and so, as in the case of the strong nuclear interaction, need not be considered in the discussion of life processes. It is clear from the above discussion that of the four fundamental interactions, the only one that can possibly affect processes taking place within the human body is the electromagnetic. All sensory perception takes place through its mediation. Seeing occurs when light, itself a stream of photons, displaces electrons in the retina, causing a flow of electrons and ions through the optic nerve. Hearing involves the transfer of energy from acoustic waves in the air to the organ of Corti in the inner ear. Acoustic waves themselves consist of greater or lesser densities of air

molecules bombarding the eardrum; each molecule transfers energy to the eardrum through an interaction that is ultimately electromagnetic. The mechanical force exerted by any macroscopic object on another may be analyzed into the electromagnetic forces acting between the atoms at the surfaces touching each other, so that feelings of touch and pain may be reduced to electromagnetic interactions. In short, all sensory stimuli, and all transfer of information within the nervous system or from one person to another, are mediated by the electromagnetic interaction. Do the concepts of "psychic force" or "psychic energy" have any place in physics? In a word: no. These are pseudoscientific terms invented by 19th-

century psychic researchers to give an aura of scientific validity to their spiritualist claims. Through the influence of Carl Jung, the use of the term "psychic energy" continues into the late 20th century, particularly in the writings of Jungian psychologists and in some parapsychological theories. While the term has certain metaphorical uses, its literal use to denote a form of physical energy capable of transmitting messages from one mind to another has led many into error. There is no physical evidence pointing to the real existence of such an energy. Use of this concept is inevitably bound up with a dualistic theory of psychology: the idea that mind and body are separate entities and that the workings of the mind are somehow connected with

the presence of a mystical psychic energy. We will have more to say about this in later chapters. 3. Rules for interactions The concept of interaction gives physicists the power of prediction. Given a mathematical expression describing an interaction between two objects, a physicist can predict what those objects will do under any given set of conditions. For example, by knowing how the gravitational interaction between the earth and a satellite varies with the distance between them and knowing the position and velocity of the satellite at a given instant, physicists can predict where the satellite will be and how fast it will be

moving at any future instant. (Of course, in any practical situation, interactions from the sun and the other planets as well as all other significant perturbations must be added to the equations.) Each of the four fundamental interactions has a different description. For example, the gravitational force depends on the masses of interacting objects; by contrast, the electromagnetic force does not depend on masses but only on electric charges. The magnitude of the gravitational force and that of the electrostatic force both vary inversely as the square of the distance between the interacting objects; but, as we have seen, at equal distances the electrostatic force is enormously stronger than the gravitational force. The

strong nuclear force behaves in a relatively strange manner: its magnitude increases as the interacting quarks get farther apart. While each interaction has a distinct individuality, certain rules are common to all. Indeed, this commonality is what leads some to suspect that one basic interaction might underlie them all. The rules referred to are known as symmetry principles, which relate to symmetries in space and time, or to other, more abstract attributes such as electric charge, parity, or "strangeness." Some of them are obeyed absolutely under all conditions, while others are disobeyed under certain specific conditions and are therefore known as "broken symmetries."
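As a side check on the comparison just made between the electrostatic and gravitational forces, the short Python sketch below computes both forces for a pair of protons. The choice of protons and of the particular separation is mine and purely illustrative; the point is that both forces follow the inverse-square law, so their enormous ratio is the same at any distance.

```python
# Compare the gravitational and electrostatic forces between two protons.
# Constants in SI units; the 1-angstrom separation is an arbitrary choice,
# since the ratio of the two forces does not depend on distance.
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
k = 8.988e9          # Coulomb constant, N m^2 / C^2
m_p = 1.673e-27      # proton mass, kg
e = 1.602e-19        # proton charge, C
r = 1.0e-10          # separation, m (illustrative)

F_grav = G * m_p**2 / r**2
F_elec = k * e**2 / r**2

print(f"gravitational force: {F_grav:.3e} N")
print(f"electrostatic force: {F_elec:.3e} N")
print(f"ratio (electric/gravitational): {F_elec / F_grav:.2e}")  # about 1.2 x 10^36
```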

For the purposes of this discussion it will be enough to describe three of the most important symmetries:

1. Space-time symmetry. In writing an equation describing the potential energy of any interaction, we first set up a coordinate system of some kind (usually either Cartesian or spherical). In the case of Cartesian coordinates, the formula for the potential energy will, in general, depend on the three spatial coordinates (x, y, and z), and perhaps on the time as well. If for a particular situation it turns out that the potential energy does not depend on one of these (the x-coordinate, for example), then we may place the origin, or zero, of the coordinate system anywhere along the x-axis without affecting the form

of the equations. In such a case, the potential energy of the system (and therefore the Hamiltonian and the Lagrangian) is said to be symmetrical with respect to the displacement of the origin in the x-dimension. It can be proved mathematically that the law of conservation of momentum in the x-dimension is a direct consequence of this symmetry. That is, if a force field is such that its potential energy does not depend on the coordinate x, then the momentum of an object in that field will never change in the x-direction. (For example, a car on a flat, horizontal, frictionless track will not change its velocity.) If the potential energy does not depend on either x, y, or z, then the space is symmetrical in all directions, and momentum is conserved in all three

dimensions. This is the empty space of Newton's first law of motion, which states that a body at rest will remain at rest, and a body in motion will remain in motion at a constant velocity. These examples show how spatial symmetries are related to the fundamental conservation laws. Another kind of spatial symmetry results if spherical coordinates are used. These coordinates are generally the radius (distance from the origin), the angle theta (the angle between the north pole and a given direction), and the angle phi (the angular distance around the equator). A system has spherical symmetry if its potential energy does not depend on any of the angles, that is, if its potential depends only on the radius. In

that event, the angular momentum of the system (mass times velocity times radius) is conserved. That is, regardless of changes taking place in the system, its total angular momentum will remain constant. This is the effect that causes ice skaters to pirouette more rapidly as they pull in their arms; reducing the radius of extension requires the angular velocity to increase in order to keep the angular momentum constant. We see at work here a principle of great generality: every symmetry is associated with a conservation law. This statement follows from strict mathematical proof and does not require experimental verification.
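The skater example can be made concrete with a small numerical sketch. The figures below (effective mass, radii, initial spin rate) are invented for illustration only; the sketch simply holds the angular momentum fixed while the radius shrinks and reports the resulting spin rate.

```python
# Toy illustration of conservation of angular momentum for a spinning skater.
# The outstretched mass is modeled crudely as a point mass at radius r;
# all numbers are invented for illustration.
import math

m = 5.0                          # effective mass carried at arm's length, kg
r_out = 0.9                      # arms extended, m
r_in = 0.25                      # arms pulled in, m
omega_out = 2 * math.pi * 1.0    # initial spin: 1 revolution per second, rad/s

L = m * r_out**2 * omega_out     # angular momentum; conserved when no torque acts
omega_in = L / (m * r_in**2)     # angular velocity after the arms are pulled in

print(f"angular momentum L = {L:.2f} kg m^2/s (unchanged)")
print(f"spin rate with arms out: {omega_out / (2 * math.pi):.2f} rev/s")
print(f"spin rate with arms in:  {omega_in / (2 * math.pi):.2f} rev/s")
```

The spin rate rises by the factor (r_out/r_in) squared; the extra kinetic energy is supplied by the work the skater does in pulling the arms inward.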

One of the most important symmetries is that in which a system's potential energy does not depend on time, which means that the zero of time may be defined anywhere. This symmetry applies to all interactions between fundamental particles, as well as macroscopic events where there is no friction or where no time-varying forces such as alternating electric fields are applied from outside the system. For situations governed by this time symmetry, it can be shown by solving the relevant equations of motion (Hamilton's equations, for example) that the total energy of the system does not change in time. Conservation of energy is thus a consequence of symmetry in time. In relativistic mechanics, space and time

make up a single four-dimensional continuum. Symmetry of translation and rotation in this four-dimensional space is known as a Poincaré symmetry. This symmetry is intimately connected with the laws of conservation of momentum, angular momentum, and energy. The connection between symmetry and conservation laws is of the utmost practical importance: it means that the conservation laws are not simply pragmatic rules that must be tested in every conceivable situation. Instead, we may say that if the interactions between fundamental particles obey certain rules of symmetry, then certain conservation laws follow, and these conservation laws must be obeyed by all matter made of these

fundamental particles. For the purposes of this book, the important symmetry is time symmetry. The mathematical expressions describing the four fundamental interactions between particles do not involve time. Therefore, energy must be conserved in these interactions; furthermore, since all macroscopic events are nothing more than collections of these interactions, conservation of energy may be said to apply to every phenomenon in the universe. Because of this we are relieved of the necessity of testing conservation of energy in every conceivable situation. We now have an easy answer to the question,

"How do you know that some new and unknown kind of machine will not be able to create energy from nothing?" If everything is governed by the four fundamental interactions, then conservation of energy is absolutely obeyed. At this point we must insert a cautionary note. There is a certain temptation for scientists to say, "Nature likes symmetry. Therefore these laws must be absolutely true." This kind of reasoning can lead to major errors. At various times people have assumed certain symmetries to be true for aesthetic reasons and then discovered that these symmetries simply did not hold under all circumstances. For example, in the 1950s it was found

experimentally that conservation of parity, which everybody had assumed to be true, did not apply to events mediated by the weak nuclear interaction. The resulting contretemps led to an increased understanding of symmetry at the fundamental level. Nature does not "like" anything. Nature simply exists. Only people can like symmetries. But liking doesn't make them true, so each symmetry must be tested for every one of the four basic interactions. In the case of a time symmetry, this means testing the law of conservation of energy as it applies to the fundamental particles. A great many experiments of this nature have engaged the attention of physicists

for the past century. Some of them involve the most precise measurements in the history of physics. Consequently, we are justified in believing the law of conservation of energy with an extraordinary degree of confidence. (One experimental test of this law will be described in the next section.)

2. Lorentz invariance. All fundamental interactions must obey an important symmetry principle called Lorentz invariance, which ensures that the laws of nature have the same mathematical form in all reference frames traveling at a constant speed. A consequence of this symmetry is that all such reference frames (called inertial frames) are equivalent. It is impossible to do an experiment that will

single out one frame as special. You can't say, for example, that one frame is at rest while all the others are moving. The only important thing is the motion of the frames relative to each other. For example, an electron moving in an electric field created by a charged electrode travels in a path determined by Newton's second law of motion: the electron's rate of change of momentum equals the applied force. In this case the applied force equals the moving electron's charge multiplied by the electric field strength. However, an observer moving relative to the charged electrode sees not only an electric field, but also a magnetic field. The kind of field he perceives depends on his state of motion.
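Before continuing, here is a minimal sketch of the frame-dependence just described. It applies the standard Lorentz transformation of electromagnetic fields to a pure electric field; the field strength and the observer's speed are arbitrary numbers chosen only for illustration.

```python
# A frame at rest relative to a charged electrode sees only an electric field.
# An observer moving past it sees, in addition, a magnetic field.
# Standard field-transformation formulas for a boost with speed v along x;
# all numbers are illustrative.
import math

c = 2.998e8                  # speed of light, m/s
v = 0.6 * c                  # observer's speed along x
gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)

E_y = 1.0e4                  # electric field in the electrode frame, V/m
B_z = 0.0                    # no magnetic field in that frame

# Perpendicular field components seen by the moving observer:
E_y_prime = gamma * (E_y - v * B_z)
B_z_prime = gamma * (B_z - v * E_y / c**2)

print(f"electrode frame: E_y = {E_y:.3e} V/m, B_z = {B_z:.3e} T")
print(f"moving observer: E_y' = {E_y_prime:.3e} V/m, B_z' = {B_z_prime:.3e} T")
```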

Nevertheless, this motion has no effect on Newton's second law. By using the appropriate equation for the force acting on a moving charge, and using the relativistic expression for the momentum of the object in question, the moving observer will find that Newton's second law of motion holds true in his reference frame as well. A special case of Lorentz invariance is the established fact that the speed of light is the same in all reference frames, regardless of their velocities. The principle of relativity follows from this fact. Experimental verification of this principle is extensive, and in recent years has become quite precise. Summaries of this evidence are included in many books

on relativity.1, 2, 3 The well-known consequences of relativity include the impossibility of any matter, energy, or information traveling faster than the speed of light. This can be proved in a variety of ways. The most convincing is the demonstration that if a message could be sent faster than light from earth to a space ship traveling faster than a certain velocity (but less than the speed of light), the ship could transmit a reply that would arrive at the earth before the original message was transmitted. (See Appendix for details.) This circumstance would violate the principle of causality (the idea that a cause must always come before its effect) and allow the occurrence of typical time-travel paradoxes. For

example, a scenario could be created in which a catastrophe takes place on earth, after which a warning is relayed from earth to a moving spaceship and then back to earth. If the ship is moving fast enough, the warning would arrive on earth at a time prior to the catastrophe. The catastrophe could thus be halted before it happened. Has the catastrophe taken place? If not, then why was the warning sent? Rather than deal with such implausible paradoxes, we say that the thing that causes them, faster-than-light travel, is impossible. Despite these paradoxes, a theory has actually been put forward that posits the existence of tachyons, particles that, because they have imaginary mass, can

travel only faster than the speed of light. This theory was valid to the extent that it made specific predictions about observable consequences; however, since these consequences have never been observed, tachyon theory remains but an interesting and imaginative conjecture.

3. Gauge invariance. Gauge invariance is a symmetry principle applicable to the theory of electromagnetism. The conservation law that follows from it is the conservation of electric charge, a law that says that the net electric charge within a closed system cannot change. That is, if N is the number of negative elemental electric charges and P is the number of positive charges in the system, then the charge number Q = P - N must be a

constant, regardless of the reactions that take place in the system. Conservation of electric charge ensures that certain types of reactions will not take place. For example, a neutron cannot change into a proton without also producing an electron: the charge number, being zero before the reaction, must remain zero after it. Similarly, a photon cannot create an electron without simultaneously creating a positron, even though it possesses enough energy (0.511 MeV) to do so. This is because the total electric charge of the system must be the same before and after the reaction. A number of other symmetry principles and conservation laws apply to the

domain of particle physics;4 indeed, one of the chief concerns of scientists in this area is to determine the nature of the applicable symmetries and to decide which of them are absolute and which are broken. In contrast to the many symmetries that are of importance only on the subnuclear level, those we have discussed in this section are of interest because they apply to macroscopic activities as well.

4. Verification of conservation laws

A century ago, to verify conservation of energy physicists had to perform experiments involving every possible kind of machine and process, for at that time no general principles existed that numbered and classified the different possible kinds

of energy. In such an atmosphere, it was logical to wonder if conservation of energy could ever be verified beyond the shadow of a doubt. Even after a million experiments someone could always ask, "How do you know that the next experiment will not turn up a new kind of energy that is not conserved?" The subject of scientific induction hinged on the ubiquitous question: How can it be shown that a law of nature is true in every possible situation when we can only make a finite number of tests? The only answer seemed to be: Falsify as many challenges to the law as you can. But even then one could still not be sure. Now the situation is different. Everywhere we look we see only a limited number of

fundamental particles and a small, finite number of interactions by which they influence each other. If we can show that conservation of energy holds true for the fundamental interactions, then we may confidently assert that it holds true for all phenomena involving those interactions. In that case, the only way to demonstrate a nonconservation of energy would be to identify a new kind of interaction that does not obey conservation of energy. So far, no such interaction has been found. As mentioned in Chapter 1, on only three occasions during this century has the possibility of nonconservation of energy been seriously entertained. Early in the century, the discovery of radioactivity gave cause to wonder how the energy

emitted by radioactive nuclei was created. A more complete understanding of the process revealed that in fact no energy was created at all, but rather that the radiated energy simply resulted from processes converting some of the internal energy of the atomic nucleus into ejected particles. Later, in 1948, Thomas Gold and Fred Hoyle tried to explain the observed expansion of the universe with their "steady-state" theory. This theory, unlike the big bang theory, conjectured that the universe has existed forever, and explained its expansion by postulating the continuous creation of small amounts of matter throughout the universe, a deliberate violation of energy

conservation. Nevertheless, evidence from astronomical observations continued to uphold the big bang theory, so that fortunately for conservation of energy the steady-state theory fell out of favor. Another reason to question conservation of energy appeared with the discovery of the beta decay of radioactive nuclei. During the 1920s, measurement of the energy of electrons emitted from these nuclei revealed that, incomprehensibly, each electron possessed a different amount of kinetic energy, which ranged from zero to a specific maximum amount. (The maximum depended on the particular isotope undergoing decay.) On the other hand, measurements before and after electron emission showed that nuclei of a

given isotope all lost the same amount of mass. This mass difference, when converted into energy units, coincided with the maximum energy of the emitted electrons. In other words, as far as the nucleus was concerned, each beta decay reaction withdrew the same amount of energy. However, almost all the emitted electrons carried away less than that amount of energy. Where did the unaccounted-for energy go? The simplest explanation was that the reaction responsible for beta decay did not obey conservation of energy. However, while superficially simple, this explanation is deeply complex, violating many fundamental principles. In addition, the spin of the nucleus before and after

beta decay differed by a whole angular momentum unit, while the emitted electron carried off only half a unit. Could angular momentum be disappearing also? Not one, but two fundamental laws seemed to be violated. It was too much to be reasonable. Wolfgang Pauli solved the problem in 1931 by his conjecture that the lost energy and angular momentum were being carried off by massless, chargeless particles which, because of their lack of charge, could not be detected. Enrico Fermi subsequently named this hypothetical particle the neutrino. The neutrino, invented to save the law of conservation of energy (and of momentum and angular momentum), remained undetected until 25

years later, when experiments by Frederick Reines and Clyde Cowan did find direct evidence of the existence of neutrinos emitted in great quantities from nuclear reactors at Los Alamos. (Direct evidence, in this context, meant an experiment detecting the interaction of a neutrino with some kind of measuring device.) During those 25 years, even though neutrinos had never been detected, belief that energy and angular momentum could not simply disappear was sufficient incentive for physicists to base the entire theory of beta decay on the existence of this elusive particle. This story is a prime example of belief in a general principle leading to correct results. However, this does not always happen.
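Returning for a moment to the bookkeeping that led Pauli to the neutrino, the short sketch below shows the idea in a few lines. The decay energy Q is a made-up round number, not data for any particular isotope, and the uniform sampling is purely schematic (real beta spectra have a characteristic shape); the point is only that once a third, undetected particle shares the energy, a continuous electron spectrum no longer conflicts with conservation of energy.

```python
# Toy bookkeeping for beta decay: each decay releases the same energy Q,
# but the electron and the (then-undetected) antineutrino share it, so the
# electron alone shows a spread of energies from 0 to Q.
# Q and the sampled fractions are invented for illustration.
import random

Q = 1.0          # total energy released per decay, MeV (made-up value)

random.seed(1)
for _ in range(5):
    electron_energy = random.uniform(0.0, Q)   # what the detector sees
    neutrino_energy = Q - electron_energy      # carried off unseen
    total = electron_energy + neutrino_energy
    print(f"electron: {electron_energy:.3f} MeV, "
          f"neutrino: {neutrino_energy:.3f} MeV, total: {total:.3f} MeV")
```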

Belief in conservation of parity, a property of nuclear particles, turned out to be unfounded in the case of the weak interaction. This demonstrates the importance of verifying each conservation law for each interaction. Modern experiments have established the validity of conservation of energy for the fundamental interactions to an astonishing degree of precision. This principle has been verified to within one part in 10^15 for both the electromagnetic and strong nuclear interactions, which means that experiments are now so sensitive that they can detect a loss of one unit of energy out of a thousand million million units; within that limit of error no loss of energy has been found.

To put the issue in comprehensible terms, imagine that you are typing a manuscript at 100 words per minute, a fairly rapid clip. Assume that your accuracy is so high that you make an error only once every 10^15 words. If you typed nonstop, it would be about 20 million years before you made an error. This same accuracy is now possible in measurements of energy. Most nonphysicists do not appreciate the extremely high precision with which modern instruments can measure energy differences. For this reason, some involved in parapsychology research are not sensitive to the logical consequences of their belief in paranormal effects such as ESP (obtaining information at a distance or transmitting information to a

distant receiver) or psychokinetics (moving objects at a distance), both of which require either nonconservation of energy or instantaneous transmission of energy through space by nonphysical means.

FIGURE 9. Energy-level diagram illustrating emission of beta particles from a mother nucleus, followed by the cascade emission of two gamma ray photons.

To clarify how it is that conservation of energy can be so precisely verified, it is worthwhile to outline one of the experiments that has been used to validate the law on a nuclear level. This experiment makes use of a phenomenon known as the Mössbauer effect, discovered by Rudolf Mössbauer in 1957. Although this effect is basically a phenomenon of nuclear physics, it has been put to use in a large variety of chemical, geological, and biological investigations, because it generates photons whose energy is more precisely

determined (has less spread or uncertainty) than those from any other source. This precision makes it an unparalleled means for detecting minute changes in energy. The Mössbauer effect has been used, for example, to measure the microscopic increase in the energy of photons falling from an elevated position that is predicted by Einstein's general theory of relativity. The Mössbauer effect utilizes gamma ray photons emitted by certain radioactive isotopes. Following the emission of beta particles from the nucleus of a parent isotope (A), gamma rays are emitted when the resulting daughter nucleus (B) is left in an excited state, that is, when it has more energy than it needs for a stable

existence (see Figure 9). Excited states (or energy levels) are a consequence of the wave nature of the particles within the nucleus; they are essentially a resonance effect. A piano string that vibrates with a full wavelength between its ends rather than the normal half-wavelength might be considered to occupy a similar excited state. When a nucleus finds itself in an excited state, it must sooner or later descend to its stable ground state by emitting one or more photons. If a nucleus has two excited states, it may jump from the second to the first and then down to the ground state, emitting two photons in cascade. This is precisely analogous to the emission of light from an atom when its electrons

descend from excited states in the outer orbits to the lowest available level, that is, to the stable ground state. The major difference is that in nuclear levels more energy is involved, so that short-wavelength gamma ray photons are emitted rather than photons of visible light. The energy of each of these gamma ray photons is equal to the difference between the energy of the two levels involved in their emission. If, as in Figure 9, the first excited state has an energy of 600 kiloelectronvolts (keV), then the emitted photon has an energy of 600 keV. A nucleus usually remains in an excited state for only a very short time (less than a

microsecond). Due to the Heisenberg uncertainty principle, this short lifetime imparts a slight variability to the energy of the excited state, so that photons emerge with a spread of energies that can be calculated using the formula

energy spread = Planck's constant / time uncertainty

where Planck's constant is 6.626 x 10^-34 joule-seconds, or 4.14 x 10^-15 eV-sec. If, for example, the 600-keV excited state has an average lifetime of 10^-8 sec, then the energy spread is (4.14 x 10^-15 eV-sec) / (10^-8 sec) = 4.14 x 10^-7 eV.
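A few lines of Python reproduce this estimate, along with the fractional spread worked out in the next sentence; the sketch simply restates the arithmetic shown above.

```python
# Energy spread of a 600-keV excited state with a 10^-8 s lifetime,
# using the estimate: energy spread = Planck's constant / lifetime.
h = 4.14e-15           # Planck's constant, eV-sec
lifetime = 1.0e-8      # average lifetime of the excited state, sec
level_energy = 6.00e5  # 600 keV expressed in eV

energy_spread = h / lifetime
fractional_spread = energy_spread / level_energy

print(f"energy spread:     {energy_spread:.2e} eV")   # about 4.1e-7 eV
print(f"fractional spread: {fractional_spread:.1e}")  # about 7e-13, roughly 1 part in 10^12
```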

This means that the fractional uncertainty in the energy is (4.14 x 10^-7 eV) / (6.00 x 10^5 eV) = 0.7 x 10^-12, or approximately one part out of 10^12. This simple calculation tells us that the photons emitted from a given level should be remarkably uniform in their energy. However, it is difficult to make a detector precise enough to discriminate between photons differing in energy by just one part out of 10^12. There is one reaction that can do the trick, a reaction that is just the opposite of gamma ray emission, namely, resonance absorption of gamma rays. In this reaction, a gamma ray photon striking

a target is absorbed by an atomic nucleus in its ground state, thus raising the nucleus to its excited energy state. In order for this to take place, the incoming photon's energy must exactly match the energy of the excited state, since the absorbed photon deposits all its energy in the nucleus and the nucleus may exist only in very sharply defined states with specific amounts of energy. There is simply no place else for the energy to go. Gamma rays can lose energy in other ways as they pass through matter (photoelectric effect, Compton effect, etc.), but if resonance absorption is to take place, the energy-matching criterion must be met. In general, this means that resonance absorption will take place only if the

photons from isotope A in Figure 9 are passed through an absorber made of daughter isotope B. Only in this manner can we be assured that the photon energy will exactly equal the energy of one of the excited states in the absorbing material. Even then, however, one more impediment to normal resonance absorption remains to be overcome: The photon emitted from isotope A has a certain momentum. In order to satisfy conservation of momentum, the emitting nucleus must recoil with an equal and opposite momentum, so that the total momentum of the system remains zero. Meanwhile, energy must be conserved, so that the small amount of energy that the photon imparts to the recoiling nucleus is subtracted from its own energy. It turns out

that this change in the photon energy is greater than the spread in the energy of the excited state. As a result, the emitted photons do not have enough energy to engage in resonance absorption when they pass through a piece of the daughter isotope. So sensitive is the resonance absorption effect that the very act of creation eliminates the photon's chance of detection. During the 1950s, a number of ways of putting the photon back into resonance were developed. The simplest and most successful method was put forward by Rudolf Mössbauer, who won the 1961 Nobel Prize in physics as a result.5 The basis of the Mössbauer effect is a technique to reduce the recoil of the

gamma-emitting nuclei by embedding them in a suitable crystal lattice. If the energy of the emitted photons is low enough, the recoil energy is too small to cause vibrations of the crystal lattice. As a consequence, the emitting nucleus behaves as though it were firmly attached to a highly rigid crystal with an essentially infinite mass, making the recoil energy infinitesimal. Not all radioactive isotopes qualify for use in the Mössbauer effect, since the primary requirement is the emission of a fairly low-energy gamma ray. However, at least 60 suitable isotopes are known, making the Mössbauer effect widely useful. One of the most frequently used isotopes is Co57, a radioactive isotope of cobalt

made by bombarding iron with deuterons in a cyclotron, causing the following reaction:

Fe56 + H2 -> Co57 + n
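As a quick consistency check (not part of the original discussion), the sketch below verifies that this reaction balances both nucleon number and electric charge; the same bookkeeping applies to the electron-capture step described next.

```python
# Check that Fe56 + H2 -> Co57 + n conserves charge (Z) and nucleon number (A).
# Each particle is written as (Z, A): proton number and mass number.
Fe56 = (26, 56)   # iron-56
H2   = (1, 2)     # deuteron
Co57 = (27, 57)   # cobalt-57
n    = (0, 1)     # neutron

Z_before = Fe56[0] + H2[0]
A_before = Fe56[1] + H2[1]
Z_after  = Co57[0] + n[0]
A_after  = Co57[1] + n[1]

print(f"charge:   {Z_before} -> {Z_after}")    # 27 -> 27
print(f"nucleons: {A_before} -> {A_after}")    # 58 -> 58
assert Z_before == Z_after and A_before == A_after
```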

The nucleus of Co57 contains too many protons to be stable. It therefore has a tendency to absorb an electron so that one of the protons may change into a neutron. This process, the opposite of beta emission, is called electron capture. It occurs because, according to the rules of quantum theory, the innermost orbital electron of the cobalt atom has a probability of being found inside the nucleus, and while inside it has a chance of being captured by a proton. The result is an element with one less positive

charge in the nucleus but with the same atomic mass:

FIGURE 10. Energy-level diagram for the transition Co57 + e- -> Fe57.

The half-life for this process is 270 days. The Fe57 nucleus resulting from this reaction is in an excited state with an energy that is 137 keV above its ground state (see Figure 10). It rids itself of this energy by emitting one or more gamma ray photons, thereby descending to its ground state. In about 9% of the transitions, the Fe57 nucleus goes directly to the ground state and emits a 137-keV photon. The rest of the time, it cascades through an intermediate level at 14.4 keV, emitting two photons in sequence: one with an energy of 123 keV, and another with an energy of 14.4 keV. The 14.4-keV gamma ray is the one in which we are interested, for it has a low

enough energy to take part in the Mössbauer effect. To use these gamma rays, we plate the Co57 source onto thin iron foils, using heat treatment to embed the cobalt atoms within the iron crystal lattice. Our purpose now is to measure the energy spread of the 14.4-keV photons. Resonance absorption is the only means of distinguishing such fine variations in energy. To use it we put an absorber containing Fe57 in the path of the 14.4-keV photons.

FIGURE 11. Apparatus for Doppler shift measurement of resonance absorption.

FIGURE 12. Data from a Mössbauer effect experiment, showing resonance absorption curve.

Since the photoelectric effect also causes absorption of photons, a way must be found to separate the broad photoelectric absorption from the resonance absorption that occurs only in a very narrow energy range. The method of accomplishing this is delightfully simple and is illustrated in Figure 11. The gamma ray source is mounted on a moving carriage; the absorber is fixed. A detector on the far side of the absorber counts the number of photons transmitted through the absorber in a fixed interval of time. The carriage moves slowly back and forth. As the

source moves towards the detector, the photon energy is increased by a small amount due to the Doppler shift. Similarly, as it moves away from the detector, the photon energy decreases. A plot of counting rate versus source velocity (see Figure 12) displays a dip at zero velocity, evidence of resonance absorption. That is, when the source is at rest relative to the absorber, absorption increases. So sensitive is the resonance effect that a velocity of just a few millimeters per second is enough to make it disappear. The width of the resonance curve, translated into energy units, measures the spread of photon energies. This spread may be greater than the width of the 14.4-keV energy level itself, because thermal

motion of the crystal lattice can introduce additional Doppler shifting. However, cooling of source and absorber in liquid nitrogen reduces this thermal spreading to a minimum. The energy spread of the emitted photons is calculated from the width of the resonance curve (defined as the full width at half maximum). This width tells how much the source velocity (v) can be varied without reducing the resonance absorption to less than half its maximum value. If the energy of photons and of the 14.4-keV level had no spread at all, the slightest motion of the source would destroy the resonance. The velocity with which the source can be moved without eliminating

the absorption allows us to calculate just how wide the energy spread is. In fact, the fractional spread (the spread in photon energy divided by the photon energy itself) is found to be simply v/c, where c is the speed of light. (Here we have used the formula for the Doppler-shifted frequency, together with the fact that the photon energy is proportional to that frequency.) Taking the curve width to be about 1 mm/sec (10^-3 m/sec), we find the fractional energy spread to be approximately 3 x 10^-12. That is, the photons emitted from the Fe57 have an energy that is uniform to within 3 parts out of 10^12. This result may be obtained in another way, using the measured lifetime of the

14.4-keV level. From the Heisenberg uncertainty principle we know that the uncertainty in the level's energy is equal to Planck's constant divided by the mean lifetime of the excited state. This lifetime is easily measured by looking at the two gamma ray photons emitted in cascade from the Fe57 and sending the output of the two detectors to a coincidence circuit. If the two photons are emitted simultaneously, then coincidences will be obtained. However, if the 14.4-keV photon is delayed relative to the 123-keV photon, then it is necessary to put a compensating time delay into one leg of the coincidence circuit to bring the two signals together and produce a coincidence. In this way, it is found that the average lifetime of the 14.4-keV level

is 1.4 x 10^-7 sec. Putting this number into the Heisenberg uncertainty relationship, we find the fractional energy uncertainty to be about 2 x 10^-12, which agrees roughly with the measurement made using the Mössbauer effect. The number obtained using the Heisenberg uncertainty is a theoretical prediction, calculated from the level lifetime. The corresponding number obtained with the Mössbauer effect is an experimental verification, because it directly measures the variation in energy. Let us examine the implications arising from this experimental verification. The observation is that 14.4-keV photons emitted by the Fe57 source are absorbed

by an Fe57 absorber and that the variation in photon energy, as calculated from the absorption curve, is only a few parts out of 10^12. There are really three parts to this observation:

1. Each of the Fe57 nuclei has an excited state 14.4 keV above the ground state. The precise energy of this state is determined by the interactions among the Fe57 nucleons, which are mediated by the strong nuclear force. These interactions determine the possible forms of the wave functions within the nuclei and the possible types of resonances that can take place in these wave functions. The fact that the resonance curve is very narrow indicates that all of the Fe57 nuclei in the

source have excited states at almost exactly the same energy, differing only by a few parts in 10^12.

2. As the nuclei in the excited state descend to the ground state, they are forced to emit photons through the electromagnetic interaction. Again, the narrowness of the resonance curve tells us that the energy gained or lost in this interaction must be less than a few parts in 10^12.

3. The photons travel through space and encounter the absorber atoms. Each photon that engages in resonance absorption disappears, giving up its energy through the electromagnetic interaction to a nucleus that is thereby

raised to its lowest excited state. Once more, the narrowness of the resonance curve tells us that the energy lost (or gained) by a photon traveling through space must be less than a few parts in 10^12. In addition, the energy of the first excited state in each absorber nucleus must be the same as the energy of the first excited state in the source nucleus.

We have thus found that, within a very small margin of error, all Fe57 nuclei have the same 14.4-keV excited state, and that the photons taking part in the reaction neither gain nor lose energy during emission, transmission, or absorption. Thus it is experimentally verified that events mediated by the strong nuclear and electromagnetic interactions obey the law

of conservation of energy to within a few parts out of 10^12. Since the energy spread of an excited state is inversely proportional to the lifetime of that state, the Mössbauer effect experiment can verify conservation of energy to an even greater degree of precision by using isotopes with longer-lived states. The isotope Zn67 has a 93-keV level with a lifetime of 9.4 microseconds. The width of this level is expected to be 5 x 10^-11 eV, according to the Heisenberg uncertainty. The measured width, using the Doppler shift technique, is found to be several times greater than the theoretical value.6 The amount of broadening depends on the

method of preparing the crystalline source, and so this discrepancy can be blamed on interactions between the nuclei and the crystal lattice in which they are suspended. In any event, these experiments show that the photon energy is uniform to within a few parts out of 10^15, thus verifying conservation of energy to that order of precision. Experimental proofs are not likely to get much better than one part out of 10^15, since interactions between nuclei and surrounding electromagnetic fields inevitably cause a certain amount of blurring of the sharpness of nuclear energy levels. This is simply an example of the ultimate limitation of all experiments: no matter how precise the experiment, when

you go looking for smaller and smaller signals, noise in the environment eventually becomes larger than the signal being sought. While Poincaré symmetry predicts perfect conservation of energy, and we have no reason to disbelieve this prediction, pragmatism insists that we not make claims greater than those verified by experiment. Therefore, our conclusion is that conservation of energy holds true for the strong nuclear and electromagnetic interactions to within a precision of one part out of 10^15.

5. Are there new and unknown interactions?

It is natural to ask whether there might exist interactions other than the four we have been discussing: gravitational,

electromagnetic, strong nuclear, and weak nuclear. Physicists periodically make serious efforts to discover evidence that would indicate the existence of new and unknown forces. Currently, for example, Ephraim Fischbach and colleagues at Purdue University are suggesting a new kind of force, similar to the gravitational force, whose magnitude is proportional to the masses of the interacting objects.7 However, this force differs from gravitation in a number of important ways. First, it is a repulsive force between particles of ordinary matter, about a hundred times weaker than the gravitational attraction at short distance. Second, and most important, at distances greater than a few hundred meters it falls off exponentially rather than according to

the inverse-square law. Therefore it is a short-range force. Because of that its effects are not noticeable at planetary distances, but in certain experiments done on the surface of the earth it might be possible to observe some small effects. One such experiment is the Eötvös experiment, in which the gravitational acceleration of falling objects is measured in order to see if the amount of acceleration depends on the composition of the falling object. Standard gravitational theory (the principle of equivalence) says that all objects should fall at the same rate in a vacuum, whereas the existence of the new proposed force would make the acceleration dependent on the baryon number of the material being

tested. (The baryon number is the total number of neutrons and protons in the nucleus.) Reanalysis of old experimental data from the original Eötvös experiment of 1905 suggests the reality of the proposed short-range repulsive force. At the present time, however, physicists do not agree on this new theory. As is frequently the case, statistical data of uncertain merit bearing on effects too small to be observed give results that appear positive to those who want to believe them. More experiments and more verification will be required before the case can be settled. Clearly, then, it is possible for new and as yet unobserved forces to exist.

Furthermore, the existence of such forces, if demonstrated, would be gladly accepted by physicists, since it would provide new work for particle theorists. Physicists will also tell you, however, that the chances for the discovery of a strong new long-range force interacting with ordinary matter under ordinary conditions are vanishingly small. New forces may be found operating in reactions taking place at extremely high energies, such as those that prevailed during the first microsecond after the big bang, or those that may be attained in particle accelerators as yet unbuilt. Alternatively, new and extremely weak forces may be found that produce such insignificant effects that they have not yet been noticed. Overall, though, there is little likelihood of a new long-range

interaction being discovered that is strong enough to produce large and easily observable effects. For if the effects were easily observed, they would have been observed already. Physicists for the past century have spent their time observing particles moving about in every imaginable circumstance, comparing what they do with what theory says they should do. The motion of electrons in electromagnetic fields agrees in minute detail with the predictions of electrodynamics, both classical and quantum. Planetary motions agree to within a very small degree of error with the predictions of the general theory of relativity (the modern theory of gravitation). In no case does evidence exist of charged particles performing

actions that cannot be accounted for by the electromagnetic interaction; likewise, if relativity is not enough to explain the observed motion of the planets, the discrepancy is at the limit of our observations. (There are some discrepancies between theory and experiment in reactions at extremely high energies, but these are related to problems in quark theory rather than to inadequacies of the theory of electromagnetism.) In this book the underlying question is whether the actions of living matter can be understood entirely on the basis of natural phenomena, or whether it is necessary to assume the existence of paranormal or supernatural phenomena. In this chapter we have shown that of the four known

fundamental forces, the electromagnetic interaction is the only one capable of initiating activity on the biochemical level. (Even when radioactivity or cosmic rays do produce biological effects, the end biological result always comes about by means of rearrangements of atomic electrons.) The question of living matter and its behavior can be answered in one of four possible ways. We divide the answers into two categories: physical and nonphysical.

Physical

1. The behavior of living matter (including thought and consciousness) can be explained entirely by the behavior of fundamental particles acting under the influence of the four forces, chiefly the electromagnetic. (The gravitational interaction is involved in inertia, and thus determines the mass of the interacting particles. The strong nuclear interaction has some minor effects on the orbital electrons of atoms.)

2. The behavior of living matter can be explained only by invoking new and presently unknown interactions that follow uniform and natural laws.

3. The behavior of living matter can be explained only by invoking new and unknown principles operating within the inner structure of quantum theory (e.g., instant transmission of information through space-time based on the connectivity of wave packets).

Nonphysical

4. The behavior of living matter can only be explained in terms of phenomena that fall outside the domain of physics (e.g., in terms of "psychic energy," élan vital, or synchronicity).

Most scientists base their work on the first answer. In the absence of evidence to the contrary, they assume that the simple model of particles interacting through four fundamental forces can in principle explain the behavior of living matter. The curve of knowledge accumulation during

the past century reinforces this belief. It is not enough only to understand what we know at any given time; it is also important to have a historical overview of the development of knowledge. Our understanding of genetics, cell development, and neurology has evolved on a strictly physical basis; there has been no need to invoke supernatural entities. Indeed, the evolution of biology has been a movement away from the élan vital and psychic forces of the past. With the elimination of invisible and supernatural forces from its vocabulary, science has been free to accumulate knowledge at an exponential rate. As we have shown, answer (2) appears to be superfluous, since any new forces

found will be too weak or too short-ranged to have any effect on matter at the biochemical level. Similarly, answer (3) can be faulted on the basis that nothing in quantum theory suggests that any observable effects could occur without a physical interaction that obeys the conservation laws, the principle of relativity, and the logical requirement that a cause precede its effect in time. While some writers have misinterpreted quantum theory to allow instantaneous communication at a distance, a serious examination of the situation dashes their hopes (as was shown in Chapter 3). Answer (4) carries us out of science and into pseudoscience, the paranormal, the supernatural, and religion. This answer is

most commonly invoked to answer questions about the nature of life, thought, consciousness, and the soul. It is implicit in dualistic theories of the mind, theories that consider the human mind to be a nonphysical entity separate from but parallel to the physical brain. Since the supernatural is outside the domain of science, it cannot be discussed on a scientific level. Yet some things must be said: First, it is clear that physical interactions cannot account for the claimed properties of most paranormal phenomena. Experimenters in the realms of telepathy, clairvoyance, precognition, and telekinesis show no regard for conservation of energy, the relation of

signal strength to distance, or the fact that a signal must be transmitted before it is received (the principle of causality). Conservation of energy enters this discussion because energy is required to transmit information from one place to another. The signal must possess enough energy to activate electrons in the nervous system receiving the signal, enough, at least, to create consciousness of a thought. Furthermore, all forms of transmitted energy follow the inverse square law: the amount of energy passing through a square centimeter of the receiver decreases in proportion to the square of the distance from the transmitter. This is simply because as energy spreads out in space

(and all energy, even a laser beam, spreads out to some extent), the same amount must cover a greater area. Therefore a signal transmitted by any normal form of energy gets weaker with distance. Paranormal signals, according to the claims of parapsychologists, do not. By the normal laws of physics, no signal can travel faster than the speed of light; moreover, since all signals propagate forward in time, no signal can be received before it is transmitted. Parapsychology experiments are indifferent to such considerations. All three of the above limitations are casually ignored in a historic paper published by a famous parapsychologist,

the astronaut Edgar Mitchell.8 Mitchell's experiment involved transmission of information (symbols on a set of ESP cards) from an astronaut orbiting the moon to a receiver on earth who guessed the symbols and calculated correlations between the sender's symbol patterns and his own. There was no effort to synchronize transmission with reception, so that no one knows whether transmission took place before or after reception. Such details simply did not concern the experimenters; nor, for that matter, did the distance between the earth and moon. The attitude of many parapsychologists toward such questions has been voiced by another prominent researcher in the field, John Beloff, who freely admits to the

problem arising from the laws of physics. He also recognizes information problems, such as the difficulty of understanding how the receiver detects a particular message in the presence of multitudes of others traveling through the same region of space. His response to these difficulties is: "It follows, if I am at all on the right track, that we must abandon hope of a physical explanation of psi."9 (Psi is an abbreviation for parapsychological phenomena.) If we do not seek a physical explanation for psi, then the question is: "Do nonphysical ways of communication exist?" Translated to the atomic level, the question becomes: "Can electrons in the nervous system begin to move and

produce thoughts without the mediation of a physical interaction?" The latter question must be asked because our current understanding of neurophysiology associates thought with electrochemical actions in the nervous system, a point of view that is borne out by experiment and observation. Even if one holds to a dualistic theory of mind, at some point one must explain how the nonphysical mind interacts with the electrons of the nervous system to start the physical brain working. Belief that thoughts exist totally independently of the nervous system is a nonphysical theory unsubstantiated by observation. It thus lies outside the domain of science. Clearly, the entire apparatus of

parapsychological research is devoted to demonstrating the existence of nonphysical phenomena. If such phenomena were actually demonstrated, we would have to decide whether to call them physical. What kind of science would they represent? Would they be included in science? Putting such hypothetical questions aside, however, we must first ask the pragmatic question: Has any kind of nonphysical phenomenon in fact been demonstrated? The most serious reviews of the data indicate that it has not.10,11 The past century of effort has culminated in experiments that cannot be replicated, data that cannot be duplicated, or in data that has been consciously or unconsciously falsified; meanwhile, researchers attempt to magnify small statistical fluctuations

into a meaningful body of knowledge. Whenever preliminary experiments seem to have given positive results, duplication with stricter controls and/or better statistics has always failed to corroborate the original "success." These failures reinforce the primary assumption of the skeptical-realistic attitude: everything that happens in nature is the result of physical interactions between fundamental particles. More reinforcement arises from a comparison of the histories of the physical and nonphysical approaches to the study of life. The past century has seen an exponential growth in our understanding of life processes based on physical principles, and although this study is still

at an early stage, we are encouraged to believe that the next century will bring continued growth and a correspondingly greater understanding of life. By contrast, no such growth in our understanding of life based on nonphysical processes has occurred.

Notes

1. R. H. Dicke, The Theoretical Significance of Experimental Relativity (New York: Gordon & Breach, 1964).
2. R. Resnick, Introduction to Special Relativity (New York: Wiley, 1968).
3. M. A. Rothman, Discovering the Natural Laws (New York: Doubleday, 1972), Chap. 7.
4. M. A. Rothman, "Conservation Laws and Symmetry," in The Encyclopedia of Physics, R. M. Besançon, ed. (New York: Van Nostrand Reinhold, 3rd ed., 1985).
5. A bibliography of early papers on the Mössbauer effect is to be found in Mössbauer Effect: Selected Reprints, AAPT Committee on Resource Letters (New York: American Institute of Physics, 1963).
6. H. de Waard and G. J. Perlow, "Mössbauer Effect of the 93-keV Transition in Zn-67," Physical Review Letters, 24 (1970), p. 566.
7. "Reanalysis of Old Eötvös Data Suggests 5th Force . . . to Some," Physics Today, October 1986, p. 17.
8. E. D. Mitchell, "An ESP Test from Apollo 14," Journal of Parapsychology, 35 (1971), p. 89.
9. J. Beloff, "Parapsychology and its Neighbors," Journal of Parapsychology, 34, p. 129.
10. M. Gardner, Science: Good, Bad and Bogus (Buffalo, N.Y.: Prometheus Books, 1981), Chap. 19.
11. R. Hyman, "A Critical Historical Overview of Parapsychology," in A Skeptic's Handbook of Parapsychology, P. Kurtz, ed. (Buffalo, N.Y.: Prometheus Books, 1985).

5. LAWS OF PERMISSION AND LAWS OF DENIAL

1. The Fundamental Laws of Physics

The laws of physics are human descriptions of events that take place in nature. Since they are human descriptions, they are often redundant: several laws under different names may describe the same set of events. (For example, Newton's second law of motion, Hamilton's equations, and Lagrange's equations all say the same thing in different ways.) The aim of writing a law of physics is to describe an infinite set of possible events in a concise way, using a small number of fundamental concepts.
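The remark about redundancy can be illustrated with a small numerical sketch (mine, not the author's): a mass on a spring is integrated once using Newton's second law and once using Hamilton's equations, and the two trajectories agree to within the numerical step size. All numbers are illustrative.

```python
# Newton's second law and Hamilton's equations describe the same motion.
# Example: a mass m on a spring of stiffness k, integrated both ways with
# a simple semi-implicit Euler time step.
m, k = 1.0, 4.0
dt, steps = 1.0e-4, 20000          # integrate for 2 seconds

# Newton: m * a = -k * x
x_n, v_n = 1.0, 0.0
for _ in range(steps):
    v_n += (-k * x_n / m) * dt
    x_n += v_n * dt

# Hamilton: dx/dt = p/m, dp/dt = -k * x
x_h, p_h = 1.0, 0.0
for _ in range(steps):
    p_h += (-k * x_h) * dt
    x_h += (p_h / m) * dt

print(f"Newton:   x = {x_n:.6f}")
print(f"Hamilton: x = {x_h:.6f}")   # the same, because the equations are equivalent
```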

Physical laws are generally written in the form of mathematical equations. Solutions of these equations provide us with predictions of what will happen in the future if we start with a given set of circumstances. It is this ability to make accurate predictions that makes physics an "exact" science. Although, as we shall see, there are numerous circumstances that forbid exact predictions, one special class of predictions has a reliability so high that disbelief in it would be a sign of aberrant skepticism. The most fundamental of the natural laws are those that allow us to predict the behavior of individual particles acting under the influence of any of the four interactions described in the previous

chapter. (Note that a planet may be treated as a particle when we are computing its motion in a gravitational field.) Newton's second law of motion was the first of the predictive laws. It states that if a known force is applied to a given mass, then that mass will change its state of motion in a specific way. That is, its acceleration will be proportional to the applied force. Knowing the present position and velocity of the mass, we can predict its future position and velocity. Other versions of the same law of motion, as previously discussed, make the same predictions by using the known variation of the potential energy (or the Lagrangian or the Hamiltonian) of the system as the interacting bodies change position.
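As an entirely standard illustration of this kind of prediction, the sketch below takes a constant force, a known mass, and a known present position and velocity, and computes where the mass will be one second later. The numbers are arbitrary.

```python
# Newton's second law as a predictive law: given the force, the mass, and the
# present position and velocity, the future position and velocity follow.
# Constant-force case, so the closed-form kinematic formulas apply.
F = 2.0             # applied force, N
m = 0.5             # mass, kg
x0, v0 = 0.0, 3.0   # present position (m) and velocity (m/s)
t = 1.0             # how far into the future we predict, s

a = F / m                         # acceleration from Newton's second law
x = x0 + v0 * t + 0.5 * a * t**2  # predicted position
v = v0 + a * t                    # predicted velocity

print(f"predicted position after {t} s: {x:.2f} m")    # 5.00 m
print(f"predicted velocity after {t} s: {v:.2f} m/s")  # 7.00 m/s
```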

Quantum mechanics, on the other hand, does not permit us to predict with precision what a single object will do as the result of an interaction. Schroedinger's equation, or its equivalents, allows us to predict only the relative probability of various outcomes. The solutions of the equation tell us (for example) that if one electron collides with another electron, there is a 20% probability it will go in one direction (or range of directions), a 10% probability that it will go in another direction, and so on. Alternatively, we might prefer to compute the average number of electrons scattered within a given range of angles when a million identical electrons collide with a target particle. The equations of quantum mechanics tell

us what is allowed to happen in a given circumstance, even if they do not tell us exactly what will happen. These equations tell another story at the same time. This story might at first appear to be a non-story, since it concerns the kinds of actions that are not allowed to happen. However, while you might think this information would have little significance, we shall presently see that it is of supreme importance. The most important example of a nonallowed event is this: A number of objects are moving about within a closed system performing numerous actions. Whatever they are doing, we know with a high degree of certainty that the total amount of

energy in the system will not change as time goes on. The amount of energy will be the same every time it is measured, as long as the system is closed. This means that no actions are allowed which involve a change in the amount of energy in the system. In a similar manner, the equations tell us that no actions within the system may change its total momentum or angular momentum. For example, when a nucleus changes its state by emitting a gamma ray photon, the photon takes away not only energy but also a unit of angular momentum. This limits the change of state to certain "allowed transitions" in which the angular momentum of the nucleus changes by one unit. All other changes are called

"forbidden transitions." The laws of nature thus divide all actions into two categories: those which are allowed, and those which are forbidden. While some actions are more allowed (i.e., have more probability) than others, those which are forbidden are absolutely forbidden. Because of this division of labor, it becomes convenient for us to classify the laws of nature into two general categories: laws of permission and laws of denial. Laws of permission are those that tell us which actions are allowed, while laws of denial tell us which actions are forbidden.

Among the laws of permission are those described in the preceding paragraphs: Newton's second law of motion (or any of its equivalents in classical mechanics) and the equations of quantum mechanics: Schroedinger's, Dirac's, Heisenberg's, etc. These laws tell us what is allowed to happen or what may happen under a given set of conditions. The laws of denial include the familiar conservation laws: conservation of energy, of momentum, of angular momentum, and of electric charge, laws associated with the symmetries discussed in Chapter 4. These assure us that in a closed system no action can take place that changes the system's total energy, momentum, angular momentum, or electric

charge. We may also add the requirement that all events obey the principle of Lorentz invariance, from which the conclusions of special relativity follow: no object, energy, or information can travel faster than the speed of light. The laws of denial mentioned above derive from the nature of the interactions between fundamental particles. In practice, the laws of denial are embedded in the laws of permission, so that the two are not really distinct; we separate them here for purposes of convenience and emphasis. Every predictive law, whether it is Newton's second law of motion or Schroedinger's equation, contains a mathematical term that describes the nature of the force propelling the particles

within the system. This force term implicitly contains the symmetry properties determining the kind of actions that are forbidden within the system, so that Schroedinger's equation, for example, tells us both what is allowed to happen and what is not allowed to happen. Thus when we speak of laws of permission and laws of denial, we are simply focusing our attention on different aspects of the same equation. A law of denial of a different sort is the principle of causality, which states that in any observed sequence of causally related events, the cause always comes before the effect. In other words, it is impossible for an effect to appear earlier in time than its cause. This implies that no force,

interaction, energy, or information can travel backward in time. (True, in quantum theory one may speak of a positron being an electron going backward in time, but this is simply a convenient way of visualizing the effect of a certain minus sign appearing in the equation.) Nothing observable ever goes backward in time. The law of causality is somewhat different from the other laws of denial, in that it does not refer to measured quantities, but rather to sequences of events. We believe it not simply because all our physical observations verify it, but because not to believe it would require admitting the possibility of certain logic-defying paradoxes. The time-travel paradox, a

classic in science fiction, forces us to consider what would happen if you could travel back in time so as to prevent the meeting between your father and your mother. In that event you could not be born. But if you were not born you could not travel back in time to prevent that fateful meeting. But then you would be born, etc., etc. Do you exist, or don't you exist? The paradox cannot be untangled. Because any transmission of matter or information from the future to the past permits the occurrence of this sort of paradox, we have to believe that the universe is structured so that observable interactions can only propagate their effects forward in time. Curiously enough, the elementary

interactions themselves operate independently of the direction of time. That is, the equations describing interactions between objects contain no information that distinguishes the future from the past. Newton's second law can be used to trace the motion of an object back into the past as well as forward into the future. A film of two billiard balls colliding looks just as plausible running backward as it does running forward. The only way we can tell the difference is to look at the motion of many objects together. A film of a cue ball breaking a triangle of billiard balls would appear very strange run in reverse. We never see a horde of billiard balls coming together to form a perfect triangle and then spitting one ball toward the cue. Numerous

examples of this kind convince us that macroscopic, observable events always progress in one direction: from past to future. To describe the fact that time runs only in one direction, we have come to speak of an "arrow of time" that always points from the past to the future. This arrow represents our awareness of the normal sequence of events. There is a great deal of evidence to support the belief that the universe does have a preferred direction of time. At least seven independent sets of observations support this belief (including the billiard-ball example given above, as well as the argument based on the time-travel paradox).1
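The time symmetry of the elementary laws themselves can be seen in a few lines of arithmetic. The sketch below (my own, with arbitrary numbers) steps a ball under constant gravity forward with Newton's second law, reverses its velocity, and steps it again under the very same law; it retraces its path, which is the single-object version of the film that looks equally plausible run backward.

```python
# Sketch (not from the text): Newton's second law run forward, then "backward".
# A ball under constant gravity is stepped forward, its velocity is reversed,
# and the same equations carry it back to its starting state: the microscopic
# law contains no arrow of time.

g = -9.8     # m/s^2 (gravitational acceleration)
dt = 0.001   # s (time step)

def step(x, v):
    # velocity-Verlet update, exact for a constant acceleration
    return x + v * dt + 0.5 * g * dt * dt, v + g * dt

x, v = 0.0, 20.0              # start at the ground, moving upward at 20 m/s
for _ in range(2000):         # two seconds forward in time
    x, v = step(x, v)

v = -v                        # "run the film backward"
for _ in range(2000):         # two more seconds under the same law
    x, v = step(x, v)

print(round(x, 6), round(-v, 6))   # ~0.0 and ~20.0 (to roundoff): initial state recovered
```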

We now put together two facts: all interactions propagate with a velocity equal to or less than the speed of light, and all observable events take place in a sequence progressing from past to future. When an event occurs, all of the effects radiating from it must fall within a "light cone" in four-dimensional space-time, a light cone bounded by a beam of light traveling away from the originating event toward the future.2 The universe outside that light cone is unreachable by any interaction producing observable effects. That means that if I am transmitting a radio signal toward the star Alpha Centauri, my signal cannot reach a planet in that system until 4.3 years have passed, and I cannot receive a reply from that planet until 8.6

years have transpired on my calendar. There is no way I can receive an instantaneous reply. (See Appendix for proof.) It should be mentioned at this point that the arrow of time and the causality principle also forbid precognition, knowledge of what will happen in the future. For to know the future implies that information can travel backward in time from the future to the present, which is not allowed. The first and second laws of thermodynamics may also be included among the laws of denial. These differ from the laws previously mentioned in that they refer to properties of complex systems, systems containing large

numbers of molecules, whereas the others refer to any kind of system, simple or complex. The reason is that the laws of thermodynamics refer to quantities such as temperature, thermal energy, and entropy, which are either average or total properties of large ensembles of particles. The first law of thermodynamics is usually stated in this form: When heat is applied to a system (e.g., to a steam engine), the change in that system's internal energy equals the amount of thermal energy the system absorbs minus the amount of mechanical work it performs. But once we recognize that thermal energy is nothing more than the aggregate kinetic energy of all the particles in the system, the first law of thermodynamics is seen to emerge from

conservation of energy. It is simply a restatement of that law for the special case of heat engines, and tells us nothing that we did not already know. The second law of thermodynamics, on the other hand, does tell us something new. It is derived entirely from the average behavior of a large number of particles over a period of time, and so applies to systems containing numerous parts, such as a gas with many molecules, a deck of cards, or a human cell. This law refers to a property of systems called entropy, which, in its simplest definition, is proportional to the amount of disorder in the system. In the language of information theory, entropy may be considered a measure of our ignorance of the state of

the system's particles: the greater the disorder, the greater our ignorance of what each particle is doing. Entropy, then, is equivalent to disorder and ignorance, defined in appropriate mathematical terms. The second law of thermodynamics can be stated either as a law of denial or a law of permission. As a law of denial, it forbids all actions that may decrease the total entropy of a closed system. As a law of permission, it allows the entropy of a closed system either to remain constant or to increase. That is, in closed complex systems, all actions tend to increase the disorder (and our consequent ignorance) of the total system.
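A rough numerical illustration (mine, not the author's) uses the statistical form of entropy, S proportional to -sum(p ln p) over the probability p of finding a molecule in each region of a room: a perfectly ordered arrangement scores zero, an even spread scores the maximum, and simple counting shows why the ordered arrangement never recurs on its own. The ten-cell room and the molecule counts are arbitrary.

```python
# Rough illustration only: entropy as disorder, in the statistical form
# S ~ -sum(p * ln p), plus the counting argument for why ordered states
# of many molecules are never observed to recur.

import math

def entropy(probabilities):
    return sum(-p * math.log(p) for p in probabilities if p > 0)

cells = 10
concentrated = [1.0] + [0.0] * (cells - 1)   # all the air packed into one corner
spread_out = [1.0 / cells] * cells           # air distributed evenly

print(entropy(concentrated))   # 0.0: perfectly ordered, minimum entropy
print(entropy(spread_out))     # ln(10), about 2.3: maximum disorder for 10 cells

# The chance that N independently moving molecules all happen to sit in the
# left half of the room at the same moment is (1/2)**N.
for n in (10, 100, 6.02e23):
    print(n, "molecules: probability about 10 to the power", -n * math.log10(2))
```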

For example, the air in a room is more orderly when collected neatly in one corner than when it is distributed evenly, just as the dust in the room is more orderly when it is all collected in the wastebasket. However, one never sees the air in a room collect itself into a corner, leaving a vacuum in the rest of the room. Left to its own devices, air always arranges itself into a condition of maximum disorder or entropy. This observed condition is easily explained in terms of theory: if a number of particles are moving around at random, it is much more probable that they will be spread out than that they will be concentrated in one place. It must be understood that the second law of thermodynamics is true only on the

average. There may be fluctuations in the arrangement of the air molecules in a room so that for a brief period some of them are arranged in a more orderly fashion than before (especially at low temperatures, near the condensation point). However, on the average, the molecules will arrange themselves in such a way that their disorder, and hence their entropy, has the maximum value allowed by the conditions of the system. It must also be understood that there may be mechanisms within a system that enable it to increase the order within a small part of itself. For example, if I were to throw a deck of cards on the floor, the disorder and entropy of the cards would spontaneously increase. I could then pick

up the cards and arrange them in order, thereby decreasing their entropy. Does this violate the second law of thermodynamics? Not at all. The law applies only to isolated (or closed) systems, which the cards in this sense are not, since they are receiving energy and information from me. The total isolated system here must include my body, my nervous system, the fuel that I burn, and the exhaust that I emit. The entropy of this total system must increase, even though the entropy of the cards (an open subsystem) may decrease. A great deal of misunderstanding and confusion surrounds the second law of thermodynamics. Many polemicists attempt to prove that living matter cannot

have evolved from nonliving matter because that would require a spontaneous ordering of simple molecules into complex ones, contradicting the second law. Henry M. Morris, author of several books on creationism, states: "If language is meaningful, evolution and the Second Law cannot both be true."3 While recognizing that the earth is not a closed system, since energy comes to it from the sun, his claim is that even in an open system it is not possible for organic molecules to organize themselves into organisms of increasing complexity simply through random chemical processes. In his view, the universe was created in a highly organized state and has been going downhill ever since. In so arguing, he and many others assume that

chemical reactions are simply random events and that the universe has not lasted long enough for random collisions of molecules to produce a living organism. The answer to this is that even if molecular collisions are random, the dice are loaded. If two molecules attract each other strongly, they will collide and stick together much more often than two molecules that do not. In other words, mere chance is not the only factor that determines whether simple molecules will arrange themselves into complex ones. Molecular reactions take place at a rate that depends on the structure of each interacting molecule and the quantum-mechanical laws that govern their dynamics. The probability that two

molecules will stick together if they collide is determined by the state of their orbital electrons, which means that some reactions are far more probable than others. In forming RNA and DNA molecules, for example, the nucleotides that are the building blocks of these complex structures link together in very specific ways determined by the affinity of some groups for others. The fallacy of calculations that attempt to prove that living organisms could not have developed through natural, "random" processes is that they assume every reaction has an equal probability. In actuality, complex molecules are built up by following the path of greatest probability. The process is in fact not random at all.
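A toy simulation can make the loaded-dice point quantitative. In the sketch below (my construction; the monomer labels and sticking probabilities are invented), a particular four-unit chain is assembled by sequential collisions, first with every pairing assumed equally likely to bond and then with one pairing given a strong affinity; the favored chemistry produces the target chain far more often than the equal-probability assumption predicts.

```python
# Toy model only: the monomers and sticking probabilities are invented.
# It compares "every reaction equally probable" with collisions whose outcomes
# are weighted by a (hypothetical) chemical affinity between A and B.

import random
random.seed(1)

MONOMERS = ["A", "B", "C", "D"]

def chance_of_chain(stick_prob, target="ABAB", trials=100_000):
    """Estimate how often random sequential additions build the target chain."""
    hits = 0
    for _ in range(trials):
        chain = random.choice(MONOMERS)
        while len(chain) < len(target):
            candidate = random.choice(MONOMERS)
            if random.random() < stick_prob(chain[-1], candidate):
                chain += candidate
            else:
                break                     # the pair failed to bond; start over
        if chain == target:
            hits += 1
    return hits / trials

uniform = lambda last, nxt: 0.25                          # the "pure chance" picture
biased = lambda last, nxt: 0.9 if {last, nxt} == {"A", "B"} else 0.01

print("equal probabilities: ", chance_of_chain(uniform))
print("loaded probabilities:", chance_of_chain(biased))   # far larger
```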

A description of how nature may, without outside assistance, construct complex molecules starting with simple forms is given by Ilya Prigogine.4 He describes how open systems, far from equilibrium, may utilize self-replicating mechanisms to form living structures in a manner that allows for biological evolution. While life, evolution, and intelligence are far from being explained by this in a complete manner, our understanding of possible mechanisms has advanced far enough for us to believe that a sufficient explanation is possible. We see from this example that in using the laws of denial to decide whether some complicated event is possible, we must be aware of the limitations of each law. We

must understand the kind of phenomena to which the law applies as well as the relevant boundary conditions. The laws of denial are usually described as applying to closed systems, but precisely what do we mean by a closed system? In the case of conservation of energy and the second law of thermodynamics, a closed system is one through whose boundaries no energy can pass. Conservation of momentum applies to systems through whose boundaries no force passes. When we use conservation of angular momentum, we refer to systems through whose boundaries no torque (or twist) is applied. In future sections, we will look at a number of systems that meet these requirements and see how the laws of denial apply to them.
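As a small worked example of this bookkeeping (my own, with arbitrary masses and velocities), take the simplest closed system: two particles colliding elastically in one dimension, with no energy or force crossing the boundary. The totals named above must come out unchanged, and they do.

```python
# Sketch of the closed-system bookkeeping described above; the masses and
# velocities are arbitrary. No energy or force crosses the system boundary,
# so total momentum and total kinetic energy must be the same before and after.

def elastic_collision(m1, v1, m2, v2):
    """Final velocities for a one-dimensional elastic collision (textbook result)."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

def totals(m1, v1, m2, v2):
    momentum = m1 * v1 + m2 * v2
    energy = 0.5 * m1 * v1 ** 2 + 0.5 * m2 * v2 ** 2
    return momentum, energy

m1, v1 = 2.0, 3.0     # kg and m/s, chosen arbitrarily
m2, v2 = 1.0, -1.0

before = totals(m1, v1, m2, v2)
u1, u2 = elastic_collision(m1, v1, m2, v2)
after = totals(m1, u1, m2, u2)

print([round(q, 9) for q in before])   # [5.0, 9.5]
print([round(q, 9) for q in after])    # [5.0, 9.5] -- unchanged, as required
```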

2. Laws of permission: Limitations The function of the laws of permission is to predict how objects will move through space or how a system will change with time under the influence of known forces. These laws are known with great precision. We can calculate with exquisite exactitude how a single charged particle will move in a known electromagnetic field. We know just how a pendulum will oscillate in a known gravitational field. But simple problems such as these are in the minority. Most practical problems create grave difficulties when we try to find an exact solution. Consider a simple example. A satellite is orbiting the earth. You know its altitude and velocity at one particular instant. What will be the shape

of its orbit? Where will it be at some future instant? In any textbook on mechanics, you will find equations suitable to solve this standard two-body problem. Plug in the appropriate numbers, and you will obtain figures from which you can plot the orbit. Now compare this calculated orbit with observations on an actual satellite, and what do you find? You find, to your dismay, that the real satellite travels in a path quite different from the one you predicted. Somehow, the equations in the book do not apply to what is actually out there. Your predictions, in other words, are off the mark. After a little thought, you conclude that to make the equations more accurate, you should have included the sun

and moon as well as the earth and the satellite. Sometimes a prediction is simply not possible. Watch a human being walking down the street and try to predict its behavior for the next hour, or even for the next ten seconds. Something mysterious is going on inside that object, determining its motions in a way that is independent of outside forces and baffling any attempt to predict where it will go next. The same would be true if you were watching a robot programmed by a random number generator. From these examples it is clear that it is often impossible (in fact, it is usually impossible) to make precise predictions

of what is going to happen in any real situation. The plight of the weather forecaster emphasizes this truth. While great advances have been made in a number of areas (particularly in the speed and power of computers), the laws of permission, which tell us what things are going to do, allow precise predictions in only a small minority of situations. Reasons for our inability to make precise predictions are many, even though the laws of physics would appear to give complete prescriptions for making such calculations. Here are some of the most important: 1. System complexity. A system does not have to be very complex to preclude exact

prediction of motions within it. In the problem of the satellite orbit, an exact analytical solution of the equations of motion can be found only for a system of two uniform bodies, for example, a spherical satellite traveling around a spherical earth. (By analytical, we mean a solution of the equations of motion in the form of an exact mathematical expression that gives the position and velocity of the satellite as a function of time.) If we try to add a third body, for example, the sun or the moon, then it is no longer possible to find an analytical solution (except in certain special situations). However, it is still possible to obtain numerical solutions to the problem by computer calculations that follow the satellite about in the earth's gravitational field and sum up all the tiny

changes of velocity and position that take place during each small time interval. Methods for performing such numerical integrations are well known, and the accuracy of the result depends only on the capacity and speed of the computer (and the amount of time available for doing the calculation). With these methods, it is not difficult to include the perturbations caused by the other planets in the solar system and the effects of irregularities in the shape of the earth. Nevertheless, it is necessary to keep in mind that the results of these computations are always only approximations to the truth. We can send a space vehicle to within a few thousand kilometers of the planet Neptune only because we periodically follow its actual motion and correct its path by the

application of rocket engines. We do not depend on hitting the target with just one initial push. With large, powerful computers, we can follow the paths of thousands of stellar bodies moving in their mutual gravitational fields, thus predicting the formation of galaxies as well as the events that take place when two galaxies approach one another. However, when the number of objects in the system increases into the millions and billions, the limitations of computers become manifest. There is simply not enough computer memory or time for such complex computations to be completed within an individual's lifetime. This is the difficulty encountered in all work dealing with the

behavior of large numbers of molecules in any kind of system, be it gaseous, liquid, or solid. Thus we are unable to simulate the earth's weather by following the paths of trillions of air molecules within the atmosphere, because our computers have too small a capacity to predict with any accuracy what these molecules will be doing for any length of time. There is, however, another approach to the problem. Mathematical methods of approximation have been developed to solve equations representing the motion of billions of molecules moving about in a fluid. After setting up a mathematical expression that represents the response of each molecule to the forces acting on it (the Boltzmann equation), we can sum the

motions of all the molecules in a small volume of space to find an equation representing their average behavior. From this procedure emerge the equations of fluid dynamics. Instead of tracking the motion of each molecule, the new equations have only a few variables, mainly the velocity and pressure of the fluid volume. The same kind of calculation can be performed for a collection of charged particles (such as electrons and ions) moving in electromagnetic fields. After averaging over the motions of numerous particles, we obtain the equations of plasma dynamics. These are examples of high-level theories emerging from fundamental models.
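The averaging step can be caricatured in a few lines. In the sketch below (my own; the drift speed, thermal spread, and particle count are made up), each molecule in one small cell is given the common drift velocity plus its own random thermal velocity, and the few fluid variables are recovered simply by averaging over the cell.

```python
# Illustration only: how a "high-level" fluid description emerges from averaging
# over many molecular motions. The drift speed and thermal spread are invented.

import random
random.seed(0)

N = 1_000_000
drift = 5.0        # bulk flow velocity of the gas in this cell (m/s, hypothetical)
thermal = 300.0    # spread of the random thermal motion (m/s, hypothetical)

# Each molecule: the common drift plus its own random thermal velocity.
velocities = [drift + random.gauss(0.0, thermal) for _ in range(N)]

bulk_velocity = sum(velocities) / N
spread = (sum((v - bulk_velocity) ** 2 for v in velocities) / N) ** 0.5

print(round(bulk_velocity, 1))   # close to 5.0: the fluid velocity of the cell
print(round(spread, 1))          # close to 300: a temperature-like variable
```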

The equations of fluid and plasma dynamics are extremely complex. Their solution is a formidable task. Usually it is necessary to resort to methods of mathematical approximation, in which small terms are omitted, leaving only the most important quantities in the equation. However, it turns out that in ignoring small terms we may be throwing out the baby with the bath water and getting results that bear no resemblance to reality. A simple classical example is the following. Take a closed container half filled with water. I would not lose any bets if I predicted that the water will always be found in the bottom half of the container. But suppose we initially put the water into the top half, making sure that

the bottom contains air under enough pressure to support the water. If we used the simplest equations of fluid mechanics to predict the behavior of this system, we would find that the water should remain suspended above the air. However, we know intuitively that this will not happen. What, then, is wrong with the equation? What is wrong is that the equation is oversimplified. It is a zero-order, or hydrostatic, approximation, containing only terms that describe a system in equilibrium, a steady-state system that does not change in time. These equations do not tell us whether the equilibrium is stable or unstable. It is necessary to add some terms that describe how the system will change with time under the influence

of the gravitational force. When that is done, we find that the system is unstable: the slightest perturbation of the boundary surface between the water and air will very quickly grow in size, and soon the water will come crashing down. This phenomenon, called the Rayleigh-Taylor instability, is just one of the many instabilities that occur in fluid systems and make predictions of future behavior so imprecise. 2. Chaos. It is not necessary for a system to be complex to make predictions difficult or impossible. Two perfectly round and elastic balls completely isolated from everything in the universe except the earth form a system whose behavior, under certain conditions, cannot

be predicted. Imagine this experiment: Hold one of the perfect balls directly over the center of the other ball, fixed rigidly on a table. Now drop it. Can you predict where that ball will be ten seconds after you drop it? If the dropped ball were perfectly centered over the stationary one, it would bounce straight up and down, and there would be no problem. But if it is just the tiniest bit off-center, then the first bounce will magnify its displacement, and each succeeding bounce will amplify it even more, until after several bounces it will have bounced completely away from the stationary ball in an unpredictable direction. You would think that in a perfect thought experiment we could set the system up in

an absolutely perfect manner, so that the dropped ball would bounce straight up and down. But even in this ideal circumstance, we would find it impossible to keep the bouncing ball completely centered during its initial fall, because the Heisenberg uncertainty principle guarantees that fluctuations in the ball's position as it is dropped will cause it to land slightly off-center on its first bounce. As a result, it is guaranteed to bounce away from the stationary ball after about ten bounces, and there is no way of predicting which way it will go. In this situation everything is perfectly calculable. Nothing but Newton's laws of motion and the geometry of elastic collisions is involved. There are two

reasons for the unpredictability of the result: (1) the Heisenberg uncertainty makes it impossible to specify the initial position of the dropped sphere with perfect accuracy, and (2) each time the ball bounces, the angle of its trajectory increases. The smallest uncertainty in the initial position is rapidly amplified until it becomes a large uncertainty in the final path. This type of amplification is responsible for a general phenomenon known as chaos. Chaos results from the application of perfectly deterministic laws to systems in which the objects involved undergo rapid changes in motion. These motions may be either oscillatory or unidirectional. Chaotic behavior stems

from the fact that a very small change in the initial position or velocity may result in a very large change in the quantity being observed. Since it is impossible to be perfectly certain of the initial condition of the system, it becomes even more impossible to predict the final outcome. The flipping of a coin to obtain a head or tail is another example of an unpredictable result of a process that follows simple, exact laws of motion. We often and mistakenly think of the coin flip as a "random" process. However, it is a perfectly deterministic action, governed only by Newton's laws of motion and by gravitational force. If you flipped the coin precisely the same way every time, you would get the same result every time. But

if the coin is spinning rapidly, then the appearance of a head depends on the coin landing during just the right interval of time. A little bit earlier or later, and you get a tail instead. The randomness comes from the fact that your initial throw is not well controlled, which makes the final outcome appear to be the result of chance. 3. Quantum uncertainty. As we have noted several times, the fundamental nature of matter makes it impossible to know precisely where an object is or where it is going. A formal description of this fact is given by the Heisenberg uncertainty principle, which states that the uncertainty in the position of an object multiplied by the uncertainty in its momentum must be greater than a specific number (Planck's

constant). This means that the more precisely the position of an object is known, the less knowledge we have of its momentum (or velocity) and vice versa. Since Planck's constant is a very small number, quantum uncertainty is usually insignificant in problems concerning large objects, such as baseballs and planets, because the uncertainties in position and velocity are many orders of magnitude smaller than the sizes and velocities involved. However, quantum uncertainty does become important in the case of microscopic objects of molecular dimensions or less, or, as we saw in the previous paragraph, when amplification of the motion of a large object causes the quantum uncertainty of its initial position

to become significant. It is not just the imperfections of our measuring instruments or the way our instruments disturb the thing being measured that cause the uncertainty. It is the intrinsic nature of the object itself that makes the position and velocity uncertain to our perceptions and to our instruments. Even though quantum uncertainties are important mainly at the atomic level, there are occasions when they may produce effects of a completely unpredictable nature on the macroscopic level. For example, a cosmic ray particle may strike a DNA molecule within a human reproductive organ, causing a genetic change that will have a real effect on the next generation. Even though this could be

a permanent mutation that would change all future generations, it is completely impossible to predict which DNA molecule will be struck by the cosmic ray particle or exactly what changes will take place as a result of that collision. We have devoted considerable attention to the difficulty or impossibility of exact prediction because there are a number of misunderstandings prevalent in our society that obscure the function of prediction and explanation in science. A great many people believe, for example, that it is impossible to explain by scientific means alone the nature of human consciousness and volition. The reason they give is that, starting with basic physics, we cannot explain how the human nervous system

works and so cannot predict what a human being will do ten minutes from now. Therefore, they argue, it is necessary to invoke psychic forces (a "mind" separate from the nervous system, a "free will" independent of the causality of physical processes) that operate by rules separate from the laws of physics. Similarly, in trying to persuade nonscientists of the truth of evolution, we find ourselves unable to explain in microscopic detail how one species evolves into another. We cannot explain how a more complex species evolves out of a simpler species, or, most mysteriously, how living matter arises from nonliving matter. Even more difficult is to explain how beings with the

capability of thought and consciousness evolve from primitive forms of life. Thus a cohort of religious thinkers finds it necessary to invoke a supernatural guiding force in order to explain evolution from the simple to the sublime. The supernatural force is necessary, they claim, to explain the psychic difference between the human and the nonhuman. Two comments can be made in this connection: First, we have shown that even in perfectly deterministic systems, a number of causes produce effects that cannot be predicted with our best computers. Therefore the fact that we cannot make successful predictions does

not imply that supernatural intervention is the only explanation for the observed phenomena. Even though weather prediction is a chancy science, we are no longer so naive as to invoke the whims of Jove to explain the occurrence of a storm. Second, we have arrived at the point where, starting from the fundamental behavior of electrons and protons and from the equations of quantum theory, we can now make detailed predictions through computer modeling concerning the structure of reasonably complex molecules.5 Furthermore, we can make some predictions concerning the detailed behavior of these molecules before this behavior is observed experimentally. Indeed, some theoretical calculations have

proved previous experimental results to be wrong. With this technique, theory may, in some instances, be more exact than experiment. Computer modeling of molecules is limited only by the speed and capacity of available computers. In recent years, the availability of parallel-processing computers has greatly increased the speed of computation with no increase in cost. With this in mind, it would be foolhardy to predict how far we can go in the direction of understanding the dynamics of life systems using basic physical principles. It would be especially foolish to predict where the limitations lie. The science of life is very young and we have just begun to explore it. The use of nonphysical,

spiritual entities to explain life is a carryover from prescientific thinking. 3. Laws of denial: Precision It is fashionable in some circles to insist that "nothing is impossible," as though to admit the impossibility of some cherished goal is to "give up trying," to have a closed mind, to be a spoilsport, a pessimist. This cliché is most prevalent in inspirational rhetoric connected with therapeutic, educational, or sporting activities. Nevertheless, one of the basic functions of science is to determine what actions are impossible in this real world. Choosing between the possible and the impossible is a task carried out by means of the laws of denial, which tie us firmly

to reality even as imaginations soar unfettered through the universe. Another fashionable cliché is that "all scientific theories are provisional," as though physics knows nothing with a certainty, and that anything we think we know now is likely to be found false in the future. Kendrick Frazier overstates this view when he writes: ". . . It does serve to remind us (as if scientists need it) of the tentative nature of all scientific knowledge and the need for humility about all current understanding."6 If all scientific knowledge is tentative, what have we been doing for the past 300 years? How can I be so sure that the computer upon which I am typing will print out the words that I am putting into it?

A more accurate assessment of the situation is to recognize that one of the fundamental tasks of science is to critically examine all knowledge and to separate, from the tentative ideas and false notions of the past, those facts that are so well established that to think them subject to change is to invite wishful thinking and foolishness. Is anything likely to change our knowledge that the earth is part of a sun-centered solar system that circulates about a spiral galaxy that in turn circulates about a larger cosmos? Is anything likely to change our knowledge that all matter is made up of atoms composed of electrons, protons, and neutrons? To say that all knowledge is tentative or provisional denotes a lack of confidence, a leaning over backward to give the impression of

objectivity, tolerance, and open-mindedness. This presumption of tentativeness stems from philosophical skepticism, which has traditionally held that absolute knowledge is impossible and that inquiry must be a process of doubting aimed at acquiring approximate or relative certainty. We have here the makings of a paradox. In this book we advocate skepticism, but total skepticism directed at scientific knowledge denies that physicists are able to make precise physical measurements whose results will not be falsified in the future. In particular, the laws of denial

defined in this chapter are experimentally so well founded that to call them "tentative" is to deny the entire structure of modern physics. Skepticism must be used intelligently to discriminate between scientific principles that have been verified beyond a shadow of reasonable doubt, theories that have partial verification, and theories, conjectures, or hypotheses that have very little foundation beyond wishful thinking. Thinking may be clarified by distinguishing between two types of skepticism: ideological skepticism and pragmatic skepticism. Ideological skepticism is disbelief based on deep-seated psychological factors. It

includes disbelief in religion because you hated your priest, minister, or rabbi as a child. It includes disbelief in conservation of energy and other laws of denial because you can't stand authority figures telling you what you cannot do. Ideological skepticism encourages you to think that we can't know anything for a certainty, and that, as a result, anything is possible. Pragmatic skepticism is disbelief in phenomena that contradict laws of nature that have been thoroughly verified by experiment and observation. It is based on a well-founded understanding of those natural laws, and of their uses and limitations.

One thing above all must be emphasized: that the validity of any law of nature rests on experimental verification. Scientists have had some cautionary experiences with laws that were assumed to be true because they seemed plausible but turned out not to be true in all situations. One of the most important events in 20th-century physics took place in 1956, when it was discovered that the law of conservation of parity is not always obeyed. Conservation of parity represents an important symmetry in quantum theory (mirror reflection, or left-right symmetry) and so was assumed to be true in all cases. However, in 1956 the physicists Tsung-Dao Lee and Chen Ning Yang realized that up until then there had been no experimental verification of conservation of parity for reactions

involving the weak nuclear force. Experiments suggested by Lee and Yang verified that the weak nuclear interaction indeed does not recognize left-right symmetry, so that parity is not conserved in reactions mediated by it. This experience alerted physicists to the necessity of testing each conservation law within the context of every type of interaction. It was no longer possible for them to make the kinds of statements that had formerly been fashionable, such as, for example, that a law had to be true "because nature likes symmetry." This means that when applying the laws of denial one must be aware of the physical domain for which each law has been verified experimentally. Once a law has

been verified for a given domain, we do not expect it to change. What are our reasons for expecting the laws of nature to be unchanging? One is our observation that the light coming from distant stars contains the same type of electromagnetic spectra (that is, the same pattern of wavelengths) as the light coming from our sun, indicating emission of light from the same kind of atoms obeying the same laws of nature. This light may have originated in stars thousands or millions of light-years away, proving that the laws of nature in existence many years ago are the same as those observed today. The past and the future are part of the same space-time continuum. There is nothing particularly

special about the present, except that that is where we happen to be. And since the laws have not changed in the past, there is no reason to expect them to change in the future. Therefore, once conservation of energy has been experimentally verified for the electromagnetic interaction, we do not expect it ever to be violated by any processes involving the electromagnetic interaction. The same statement can be made for the remainder of the four fundamental interactions. One caveat must be made: the Heisenberg uncertainty principle determines the margin of error on conservation of energy (as well as on conservation of momentum). This is due to the relationship between the uncertainty in the energy

measurement and the amount of time available for that measurement. This relationship has already been encountered in Section 4.4:

ΔE x ΔT = 6.626 x 10^-34 J-sec = 4.14 x 10^-15 eV-sec

From this equation we see that if we have one second in which to make an energy measurement, then the uncertainty in the measurement is 6.6 x 10^-34 joules, or 4.1 x 10^-15 eV. Since the smallest transfer of energy encountered in atomic interactions is of the order of 1 eV, you can see that the uncertainty (the possibility of energy nonconservation) is totally insignificant. On the other hand, if you only have 10^-15 seconds in which to

make the measurement, then there can be an uncertainty of at least 4 eV in the measurement. This means that if a system absorbs some energy, but lasts only 10^-15 seconds in this state, then the energy it emits can be 4 eV greater or less than the energy that was absorbed, resulting in a violation of conservation of energy to that extent. But this is not a situation we are likely to encounter in real life. Reactions taking place at an atomic or molecular level require times in excess of 10^-12 seconds. For a reaction involving an energy transfer of 1 eV in 10^-8 sec, the quantum uncertainty is less than one part per million. Since the experiments verifying conservation of energy have a known

margin of error (one part in 10^15 for the electromagnetic and strong nuclear interactions), we must ask ourselves whether we are allowed to say that violation of this law is absolutely impossible. A cautious approach would be to say that violations of the law in amounts greater than one part in 10^15 with respect to the electromagnetic and strong nuclear interactions are impossible. Since this is a verbose and clumsy construction, I will henceforth simply say that violations of conservation of energy are forbidden, with the understanding that if violations of a magnitude less than the present limits of experimental error do exist, we have not been able to observe them.
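The figures quoted in the last few paragraphs are easy to recheck. The short script below follows the text's convention ΔE = h/ΔT, with Planck's constant expressed in electron-volt-seconds; the time intervals are the ones used above.

```python
# Check of the energy-time figures quoted above, using the text's convention
# delta_E = h / delta_T with h = Planck's constant (not h-bar or h/4*pi).

H_JOULE_SEC = 6.626e-34
JOULES_PER_EV = 1.602e-19
H_EV_SEC = H_JOULE_SEC / JOULES_PER_EV      # about 4.14e-15 eV-sec

for delta_t in (1.0, 1e-15, 1e-8):          # seconds available for the measurement
    delta_e = H_EV_SEC / delta_t            # unavoidable energy uncertainty in eV
    print(f"{delta_t:.0e} s  ->  {delta_e:.2e} eV")

# 1e+00 s  ->  4.14e-15 eV  (negligible next to a ~1 eV atomic energy transfer)
# 1e-15 s  ->  4.14e+00 eV  (a very short-lived state can "miss" by a few eV)
# 1e-08 s  ->  4.14e-07 eV  (under one part per million of a 1 eV transfer)
```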

Having established that convention, I am now in a position to give uncomplicated answers to complex questions. The tools I shall use in this process are the laws of denial. The law I shall use most often is the law of conservation of energy. My method will be simple: any time I am presented with a proposal for a new machine, phenomenon, or scientific theory, I will not need to go through elaborate calculations examining this proposal in all its minute details. Instead I will simply ask: Where is the energy? Where does it come from and where does it go? How much energy goes in and how much comes out? If the answer to these questions is that more energy comes out than goes in (or that there is any unaccountable change of energy), then I

need go no further. I stop right there until I get a detailed explanation. In this manner, all claims for perpetual motion machines are simply and immediately discarded. Let someone approach me with plans for a device of any form that puts out more energy than goes into it, and I can react in one of two ways: 1. Extreme pragmatism (or ultra-hard-nosed reaction): I say immediately, "The device is impossible because it violates conservation of energy. Therefore I won't listen any more." 2. Soft pragmatism: I say, carefully, "I am skeptical about this device because it

violates conservation of energy, but I am willing to be shown. It is up to you to prove that it really works, and the only way you can do that is to build a functioning machine." The U.S. Patent Office has traditionally taken the extreme pragmatic course, but was recently forced by a court decision to adopt the softer position in the case of Joseph Newman, a Texan battling to get a patent on an energy-generating machine. Mr. Newman does not deny that conservation of energy is a valid law. His fight with the U.S. Patent Office centers around his claim that his machine does not violate conservation of energy and therefore is not a perpetual motion machine. He says that his machine merely

releases energy stored within the device in some unspecified form and so does not differ from any other fuel-burning generator. The Patent Office has therefore been required to conduct a testing program to determine how the device actually works, with results indicating that no mysterious energy source exists and that the machine does no more work and puts forth no more energy than is supplied to it from its batteries. This example illustrates a characteristic of many current claims about so-called paranormal phenomena. Since almost everybody admits to the validity of conservation of energy, whenever something is supposed to take place that appears to violate this law, the

explanation is that the energy comes from a new and hitherto unknown source that provides, naturally, a new form of energy. When I am presented with this kind of story, my response is: if you claim the existence of a new form of energy, then you must prove its existence using the tools of science. Violations of energy conservation may appear in forms that are not immediately obvious, since a number of secondary laws of nature have conservation of energy hidden within them. One such law important in any discussion of information transmission is the inverse-square law, which says that the strength of any signal radiated through space must fall off inversely as the square of the distance

between the transmitter and the receiver. This law recognizes the fundamental fact that to send any kind of information through space, one must transmit energy to a receiver by some physical means: an acoustic wave, a stream of photons, or some other means. By "signal strength" is meant the amount of energy passing through each square meter of space per second: the transmitted power per unit area. Every kind of transmitted energy must spread out through space to a certain degree (with the exception of messages sent through solid media such as wires, light pipes, or wave guides). Light from a bulb spreads out equally in all directions. Even light from a laser, though it seems to

travel in a highly focused beam, does spread out gradually, so that after it has traveled far enough it, too, obeys the inverse-square law. (This is required by the Heisenberg uncertainty principle.) As the energy from any source spreads out, it travels through a greater and greater area. Think of the light emitted from an incandescent bulb into a vacuum. One meter away from the bulb, the light passes through a sphere whose surface area is 12.57 square meters. Two meters away, the light passes through a sphere whose area is four times that. But the total power (energy per unit time) passing through the second sphere must be exactly the same as that passing through the first, since by conservation of energy there is no way for

the amount of energy to increase or decrease. Thus the intensity of the light (power per square meter) passing through the second sphere is one-fourth that passing through the first. This proves that the light traveling away from the bulb follows the inverse-square law. Most important for our purpose is the fact that the crucial physics of the proof centers around the law of conservation of energy. Everything else is simple geometry. Of course, there is nothing to prevent a signal from falling off at a rate greater than that predicted by the inverse-square law. This will happen if the signal is absorbed or otherwise blocked. The bottom line is that any message transmitted through open space must get weaker as it

travels away from its source. Or, to put it in the form of a law of denial, it is impossible to send a message through space without the message getting weaker as it goes along, at a rate greater than or equal to that mandated by the inverse-square law. It is this logic that makes us skeptical of ESP experiments that ignore the effect of distance between the receiver and the source of information. Experimenters blithely send their transmitters as far as the moon, ignoring the constraints of both space and time (see Section 4.5). On the other hand, many parapsychologists recognize this flaw in their epistemology. They reply by appealing to theories that posit new phenomena that circumvent the

restrictions of geometry. However, if they expect people to believe in these new phenomena, then they must demonstrate their existence. Conversely, an unambiguous demonstration of ESP would help prove the existence of a new means of communication obeying laws totally different from those presently known. But no such demonstration has been made. In the remaining chapters of this book, we shall explore other applications of the laws of denial and see how their use can bring order and simplicity into a large number of scientific and pseudoscientific disputes. Notes

1. T. Rothman, "The Seven Arrows of Time," Discover, 8 (February 1987), p. 63.
2. E. F. Taylor and J. A. Wheeler, Spacetime Physics (New York: W. H. Freeman, 1963), p. 39.
3. H. M. Morris, The Scientific Case for Creation (San Diego: Creation-Life Publishers, 1977), p. 16.
4. I. Prigogine, Order Out of Chaos (New York: Bantam Books, 1984).
5. H. F. Schaefer III, "Methylene: A Paradigm for Computational Quantum Chemistry," Science, 231 (March 7, 1987), p. 1100.

6. K. Frazier, "The 'Whole Earth' Review of the Fringe," Skeptical Inquirer, 11 (Winter 1986-87), p. 198.

6. REDUCTIONISM, PREDICTION, AND THE LAWS OF DENIAL Reductionism is a philosophical position that may be defined in two ways: 1. The physical definition: Reductionism is the belief that all phenomena, including those of life, are reducible to the workings of fundamental particles interacting in response to the few forces that control their behavior. The élan vital hypothesized by Henri Bergson as a source of life and evolution does not exist. All of nature is the domain of physics. Of vitalism there is no need (nor mention in textbooks on molecular biology). Reductionism in this sense is acceptable to most scientists, and is the point of view

explicitly maintained in this book. 2. The philosophical definition of reductionism is theory reduction: the idea that the laws of chemistry are reducible to the laws of physics, while the laws of biology are reducible to the laws of chemistry and physics.' The application of reductionism in this sense is controversial, particularly in the fields of biology and psychology. Analysis of animal growth, behavior, and evolution requires the use of high-level abstractions such as information, meaning, awareness, and feeling, which appear offhand to be fundamentally different in nature from the concepts of matter and energy. Information is based on patterns: words

on paper, electrical pulses along a wire, electrochemical pulses along a nerve, patterns of bases in a DNA molecule. Our awareness of the meaning of an information pattern seems to require concepts that cannot be reduced to the physical properties of elementary particles. However, it is premature to make a final judgment on this subject. While we are nowhere near reducing the concepts of awareness and meaning to the molecular level, it would be presumptuous to put limits on how far we can go in this direction. The study has barely begun. Therefore, the introduction of vitalism at this stage of scientific development would be merely a superstitious expression of

our current ignorance. Just four decades have passed since Claude Shannon quantified the theory of information in 1949. The intervening time has seen, not coincidentally, the development of the computer industry, with its emphasis on information processing, data analysis, pattern recognition, and artificial intelligence. All of these processes, formerly the exclusive domain of the animal mind, can now be performed to a greater or lesser extent by electronic circuits. It is not hard to see the relation between information processing and electronics. We have seen (Section 3.1) that a simple electronic circuit known as a coincidence

circuit (also known to computer engineers as an AND circuit) has two input ports and one output port. If two pulses are applied to this circuit simultaneously (one at port A and another at port B), a pulse issues from the output port. It is relatively simple to increase the number of input connectors to eight. In this case an output pulse appears if a pulse is applied simultaneously to each of the eight input ports. Going one step further, it is not difficult to arrange the internal circuitry so that an output pulse is obtained if and only if pulses are applied to input ports numbered 1, 2, 5, and 7, while no pulses are applied to ports 3, 4, 6, and 8. Now we have a circuit that responds to a particular sequence of pulses: it is a primitive pattern-recognition system. The

pattern is a pulse code: it can represent information (either a number or an alphabetical character) and is just the kind of code used in digital computers. With this example, we have reduced the concept of information to the motion of electrons in a circuit. It is important to understand that the information referred to in information theory has no "meaning." Furthermore, awareness of this information is not required; the computer engaged in information processing is not conscious of the signals passing through its vitals and does not care about their meaning. Information theory simply deals with the mathematics of messages and answers questions such as, "How rapidly can

information be conveyed through a transmitter capable of handling a given range of frequencies?" Questions concerning meaning and awareness will be answered if and when we are able to build a computer or robot that is aware of its own existence. Much of the reluctance to adopt the concept of reductionism in biology and psychology stems from the difficulty or impossibility of making detailed predictions for biological systems starting with the fundamental laws of physics. At present, we are unable to predict "from scratch" the structure of the DNA molecule that might be formed from a given number of hydrogen, carbon, oxygen, and nitrogen atoms. We are also

unable to predict how a gene in a fertilized egg will determine the structure of a living being made up of billions of cells, let alone how intelligent this being will be and how it will behave when faced with an emergency. As we saw in the last chapter, there are a number of reasons why most predictions are highly difficult or even impossible in practice. What, then, is the good of reductionism? How do the laws of physics give us the power to make any statements about biology or psychology, about evolution or human behavior? The answer lies in the laws of denial. While we are unable to make good predictions about what things will do using the laws of permission, we can

make very precise predictions about what they cannot do. These negative predictions thus also enable us to make a large number of statements about the possibilities of human behavior. In the following pages I present a number of negative predictions and explain why I think the predictions are good. Some of them are fairly obvious and would be disputed by few, if any. Others are more subtle and will be denied by hordes of "true believers." But I will take my chances and look forward to being proved wrong, since all of these predictions deal with impossibilities of an intriguing nature: 1. I will never be able to jump as high as

the moon, at least not without mechanical aid. My muscles simply cannot supply enough energy, and no energy will come out of nowhere to propel me higher than a few meters, no matter how hard I try. 2. I will never suddenly burst into flames. (Sensational newspapers have recently been publishing stories about the spontaneous combustion of humans.) Ignition of any fire requires that the molecules of both combustible and oxidizer move fast enough to overcome the potential barrier between them, which is why hydrogen and oxygen can remain mixed in a container without mishap until combustion is started with a spark. Without this potential barrier, all exothermic reactions would immediately

and spontaneously take place, and the entire world would go up in smoke. Conservation of energy assures us that this will never happen; the molecules will never acquire the energy needed for ignition all by themselves. (There are special situations where spontaneous combustion does take place. In these cases, naturally occurring oxidations slowly raise the temperature of the reactants to the ignition point, where true combustion begins. Also, electric sparks generated by friction may ignite highly combustible materials.) 3. I will never suddenly levitate and rise up off the floor, no matter how hard I will it, contrary to the claims of some eastern cults. Not only does this action violate

conservation of energy, but conservation of momentum as well. An object at rest must interact with another object in order to start moving. Spontaneous motion of household objects is routinely claimed by believers in poltergeists. However, anyone who claims that poltergeists are the outside force responsible for the propulsion of crockery must first demonstrate that poltergeists exist. Scientists are not required to believe this until it has been demonstrated by means of physical evidence. All reliable investigations have shown that in any house reputed to be occupied by poltergeists, whatever goes flying through the air has been propelled by human hands.
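The momentum half of the argument in item 3 can be sketched numerically (my own example, with arbitrary numbers): internal forces come in equal and opposite pairs, so however hard the parts of an isolated body push on one another, the total momentum stays zero and the body as a whole cannot set itself in motion.

```python
# Sketch of the momentum argument above; masses, force, and times are arbitrary.
# Two halves of an isolated object push on each other with equal and opposite
# internal forces. Each half accelerates, but the total momentum never changes,
# so the object cannot levitate or launch itself without an outside interaction.

m1, m2 = 1.0, 3.0        # masses of the two halves (kg)
v1, v2 = 0.0, 0.0        # both start at rest
force, dt = 50.0, 0.001  # internal force magnitude (N) and time step (s)

for _ in range(1000):    # one second of mutual pushing
    v1 += (+force / m1) * dt    # force on half 1 from half 2
    v2 += (-force / m2) * dt    # Newton's third law: equal and opposite

total_momentum = m1 * v1 + m2 * v2
print(round(v1, 3), round(v2, 3))    # 50.0 -16.667: the parts fly apart
print(abs(total_momentum) < 1e-9)    # True: the total momentum is still zero
```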

4. No one will ever build a flying vehicle that is capable of hovering high in the air while supported by nothing but magnetic fields. This applies to inhabitants of other planets as well. UFO enthusiasts often claim that the flying saucers they "observe" are held suspended in the air and obtain their propulsion from a self-generated magnetic field. However, it is not possible for a vehicle to hover, speed up, or change direction solely by means of its own magnetic field. The proof of this lies in the fundamental principle of physics that nothing happens except through interactions between pairs of objects. A space vehicle may generate a powerful magnetic field, but in the absence of another magnetic field to push

against, it can neither move nor support itself in midair. The earth possesses a magnetic field, but it is weak, about one percent of that generated by a compass needle. For a UFO to be levitated by reacting against the earth's magnetic field, its own field would have to be so enormously strong that it could be detected by any magnetometer in the world. Similarly, it does not help to talk about the UFO's magnetic field reacting with the iron in the earth's core, since interactions between a magnet and a bare piece of iron are always in the form of an attraction. And finally, as the magnetic UFO traveled about the earth, it would induce electric currents in every power line within sight, blowing out circuit breakers and in general wreaking havoc. It

would not go unnoticed. These objections do not apply to rail levitation schemes, in which a train is supported and propelled by a magnetic interaction between the train and the rail, since here we are dealing with heights of a few inches at most. Through such a small separation, reasonably strong magnetic fields can exert very strong forces. This discussion exemplifies an important general principle. A given idea, such as levitation of a UFO by a magnetic field, may sound plausible at first, but when numbers are put into the known equations, the results can be disappointing. In fact, there is no force in nature that can keep

any kind of vehicle levitated and motionless at a high altitude (apart from helicopters, rockets, and satellites in synchronous orbits). Which brings us to the next negative prediction. 5. No one will ever build an antigravity machine, simply because no gravitational repulsion exists. Unlike the two kinds of electrical charge, there is only one kind of gravitational "charge," so that no matter how you arrange particles of matter, the gravitational force between them will always be an attraction. Therefore, there is no way to produce a gravitational shield or a machine that will nullify or reverse the effects of gravity. Electrical shields, on the other hand, are possible, because electric charge comes in both

positive and negative varieties, allowing for repulsions as well as attractions. But as there is only one kind of mass, there is also only one kind of gravitation. Antigravity is a concept originated in fiction and exists nowhere else. 6. No one will ever build a time-travel machine. Regardless of this concept's popularity in fiction and film, time travel violates the principle of causality. It allows a cause to come later in time than the effect, while the universe allows chains of events to go only in one direction from past to future. If you could travel to the past, you could warn people to prevent a catastrophe that you knew happened yesterday. Thus you would be interfering with events that have

already happened. One-way travel into the future would not violate this principle, but there is no behavior of matter that hints at how this could be done. The idea of time travel was invented in science fiction during a period when time was thought of as a river that flows along from past to future. If you could only paddle a little faster you could travel into the future. But we no longer think of time so naively. It is not a dimension which allows you to skip over events yet to come. As matter moves along, one particle interacts with another, one object interacts with another. There is a continuity of interactions from past to future. There are no shortcuts. Yet the idea of time travel lives on with a life of its own.

7. No one will ever make a killing on the stock market by foreseeing the future. The objection to this scheme is the same as that for time travel. To see into the future (precognition) requires that information be sent from the future into the present. But transmitting a signal at a time later than the time of reception violates causality. There is no interaction that goes backward in time. There are those who claim to predict the stock market by precognition, but in a rising market you can make money by throwing darts at a board, so claims of this nature have no more authenticity than those of the street-corner gypsy fortuneteller. 8. Nobody will ever send a message through space that does not diminish in intensity as it travels away from the

sender. The reason is the inverse-square law, a corollary of conservation of energy, as described in Section 5.3. This prediction applies to all proposed forms of telepathy. It does not help to appeal to fields and forces as yet unknown, because any message, to be received and brought to consciousness within the brain, requires the activation of a number of neural electrons to start a signal going in the nervous system, and that takes energy. How much energy does it take? Let us imagine the smallest amount of energy capable of activating a cell in the nervous system. For example, the minimum amount of energy that could possibly activate a receptor in the retina of the eye would be

the energy in a single photon of green light, about 2.4 electron-volts. (The eye is most sensitive to green light.) However, one photon does not an image or a message make. Let us say we are trying to visualize an image of 100 by 100 points. This would require 10,000 photons. Say to ensure continuous vision there are a hundred of these images per second. The total power required is then about 4 × 10⁻¹³ watts. Say this power is being radiated into the brain, covering an area of 10 cm². The power per unit area is thus about 4 × 10⁻¹⁰ watts per square meter. This is a very small power density. But suppose the source is one kilometer away, radiating in all directions. The power of the source must then be about 2 milliwatts. At 10 km

distance, the source must radiate 0.2 watts, and at 100 km the source power would have to be 20 watts! Any farther, and the transmitting brain would light up like an incandescent bulb. Actually, the human body consumes about 100 watts altogether (2000 kilocalories over a 24-hour period), so that to imagine the brain radiating watts of power in any manner is to get far afield into fantasy. For these reasons, no one could possibly send telepathic messages over indefinite distances without running out of power.
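For readers who want to check that arithmetic, here is the same estimate written out as a short calculation (a sketch of mine, not the author's; the image size, frame rate, receiving area, and distances are the assumptions stated above, and the radiation is spread over a full sphere, so the results agree with the figures above only to within a factor of two or so).

    # The telepathy estimate above, laid out step by step (numbers follow the text's assumptions).
    from math import pi

    eV = 1.602e-19                   # joules per electron-volt
    photon_energy = 2.4 * eV         # one green photon
    photons_per_image = 100 * 100    # a 100-by-100-point image
    images_per_second = 100          # assumed rate for continuous vision

    power_received = photon_energy * photons_per_image * images_per_second
    print(f"power delivered to the receiver: {power_received:.1e} W")    # about 4e-13 W

    area = 10e-4                     # a 10 cm^2 receiving area, in square meters
    intensity = power_received / area
    print(f"required intensity: {intensity:.1e} W/m^2")                  # about 4e-10 W/m^2

    for r in (1e3, 1e4, 1e5):        # 1 km, 10 km, 100 km
        source_power = intensity * 4 * pi * r**2   # inverse-square spreading over a sphere
        print(f"source power at {r/1e3:.0f} km: about {source_power:.2g} W")
    # Roughly 5 milliwatts, half a watt, and fifty watts: the same order of magnitude as the
    # figures quoted above (the small difference is only in how the spreading is counted).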

9. No one will ever send or receive any kind of message that travels faster than the speed of light. In particular, no one will ever send any kind of message instantaneously from one place to another. This prediction applies to any kind of observable energy, as well as to any kind of extrasensory perception (ESP), be it telepathy or clairvoyance. It also applies to physical travel faster than light (FTL). While FTL travel and message transmission are among the chief underpinnings of science fiction, they are strictly forbidden by the principle of relativity, in conjunction with the causality principle. Many relativity textbooks detail the proof of this statement; an outline of one proof is given in the Appendix. We have shown (Section 3.3) that the new quantum effects connected with the EPR experiment and Bell's theorem still do not give promise of FTL communication, regardless of the claims of many writers,

and even though the fundamental nature of matter remains swathed in mystery. That FTL travel or communication is forbidden is hard to accept, since those of us who were raised on science fiction became accustomed to thinking of travel to the distant stars as part of our birthright. It is a powerful dream. Giving up any dream is sad, and there are many who refuse to give up this one. But as with many other fantasies, faster-than-light travel is and always was a pseudoscientific concept. It was never part of science. 10. No one will ever influence the position or motion of any kind of physical object from a distance (telekinesis or psychokinesis) just by thinking about it.

This is a delicate subject, for a number of legitimate scientists have become involved in experimentation in this area. One of these is Robert Jahn, dean of engineering and applied sciences at Princeton University. These scientists have created laboratories with the aim of monitoring electronically the influence of thought on the behavior of random number generators or on the motion of single photons or other particles. There is no theory explaining how such an influence might be exerted, and although this work has been going on for a number of years, it is not clear that observed results have shown more than the usual statistical fluctuations that are to be expected in any research where the results consist of tiny variations in a large number of

measurements. I predict that no unequivocal positive results will appear in the future. Telekinesis requires the transmission of energy and momentum from the brain to the object being moved. The brain is incapable of transmitting sufficient electromagnetic energy to affect distant particles. If the aim of this research is to demonstrate the existence of a new kind of energy, then we ask why the human brain should manifest a kind of energy unknown to other mechanisms. On reading what these workers have to say, it is clear that their purpose is to establish vitalism as a scientific reality. The kind of energy they have in mind, in other words, is psychic. So far their experiments have not demonstrated the kind of regularity and repeatability required to increase the

amount of knowledge in their field. A knowledge of the fundamental laws defines the physicist, but even with this knowledge, too much love for a pet theory can lead a physicist into error. In particular, the determination to search for physical justification of spiritual belief can seriously compromise a scientist, so that physical scientists entering into parapsychology research are advised to proceed with extreme caution. 11. No one will ever demonstrate that astrology really works. Astrologers base their claims on a number of causes: the force of gravity from the sun, moon, and planets (gravity causes tides, doesn't it, so why can't it influence the fluids in the

brain?), vague psychic forces; or, if they are smart, they don't try to explain it at all. For physical explanations can be ruled out immediately. Gravity doesn't come close to working when you put numbers into the equations. First, gravity is an incredibly weak force. Second, the primary or first-order gravitational force affects every atom in the body equally and thus could make no change in the mind or in a person's behavior. The second-order or tidal force does affect the top of the body somewhat differently from the bottom, but how big is its effect? The tidal effect when the moon is overhead is equal in magnitude to the difference in gravitational force you would experience if you stood on a chair.
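That comparison is easy to check. The short calculation below (mine, not the book's; it assumes standard textbook values for the moon's mass and distance and takes a chair to raise you about half a meter) puts the two accelerations side by side.

    # Check of the "standing on a chair" comparison (standard round values assumed).
    G = 6.674e-11       # m^3 kg^-1 s^-2
    M_moon = 7.35e22    # kg
    d_moon = 3.84e8     # m, mean earth-moon distance
    M_earth = 5.97e24   # kg
    R_earth = 6.37e6    # m
    h_chair = 0.5       # m, roughly how much higher a chair puts you

    # Tidal acceleration from the moon at the earth's surface (moon overhead): the difference
    # between the moon's pull here and its pull at the center of the earth.
    a_tidal = 2 * G * M_moon * R_earth / d_moon**3

    # Change in the earth's own surface gravity when you climb half a meter.
    g_surface = G * M_earth / R_earth**2
    delta_g_chair = 2 * g_surface * h_chair / R_earth

    print(f"moon's tidal acceleration: {a_tidal:.1e} m/s^2")        # about 1.1e-6
    print(f"change in g from a chair:  {delta_g_chair:.1e} m/s^2")  # about 1.5e-6
    # Both are about a millionth of a meter per second squared, confirming the comparison.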

Tidal forces from the sun and moon are smaller than the normal tidal forces we experience when we perform ordinary activities such as climbing stairs or mountains. Finally, our behavior is not controlled by the sloshing of fluids inside our brains, but by electrochemical pulses passing through our nervous systems. Finding a mechanism to explain astrology by natural causes is like trying to explain how Jove could throw lightning bolts. Astrology is simply not a part of natural science and so is not explainable in scientific terms. Furthermore, since no proper study has demonstrated the validity of astrological prediction, nothing requires explanation. In this survey of negative predictions I

have taken a very hard skeptical approach. All my statements have been based on what we know about particles, interactions, and laws. The instant response from believers will be: "How do you know there is not a new kind of force, as yet unobserved, that could explain ESP, telekinesis, astrology, and so on?" Clearly the purpose of parapsychology research is to discover evidence of just such a new kind of force. One reply is that, although research into paranormal phenomena has been going on for well over a century, the amount of knowledge in that domain has not increased during that time.2 By knowledge we mean a body of experimental results that can be replicated by anybody possessing the right equipment, together with a theoretical or

conceptual framework that explains or ties together the observations. In parapsychology research, the equipment becomes more sophisticated as time goes on, but with no increase in the accumulation of knowledge. Research in physics, on the other hand, has accumulated knowledge at an exponential rate for 400 years, and no evidence for new forces (outside the four forces of the standard model) has been found. On the contrary, the more scientific knowledge accumulates, the more we are able to explain the human body and mind without appealing to vitalism or to such concepts as psychic energy. As scientific knowledge expands, the room for spiritual explanations shrinks. Indeed, the

abandonment of supernatural explanations during the past century is what has made our current explosion of knowledge in biology and neurophysiology possible.

Notes
1. K. S. Thomson, "Reductionism and Other -isms in Biology," American Scientist, 72 (July-August 1984), p. 388.
2. Ray Hyman, "A Critical Historical Overview of Parapsychology," in A Skeptic's Handbook of Parapsychology, P. Kurtz, ed. (Buffalo, N.Y.: Prometheus Books, 1985), pp. 4-8.

7. VERIFICATION AND FALSIFICATION

1. Existence proofs

Hordes of complaints about my predictions are sure to be registered. My skeptical judgments of paranormal phenomena (ESP, telekinesis, etc.) are based not only on the laws of denial, but also on the denial of new and as yet unknown forces, interactions, and energies. I hear believers singing, "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." Therefore there is more to be said. Consider the important assumption made

in the last chapter: that any energy emitted from the brain must travel outward equally in all directions, for the brain has no directional antennae to focus the radiation onto one particular receiver. "How do you know that?" asks the believer. "Because," I say, "a highly directional antenna must have a very uniform structure whose dimensions are much greater than the wavelength being transmitted. This is a rule that applies to any kind of wave. To determine how big such an antenna must be, we must first estimate the shortest wavelength that might be produced by the body. To begin with, there are no electrochemical processes in the body

capable of producing radiation whose wavelength is less than that of visible light, because the natural body processes cannot generate the necessary photon energy. (In fact, the normal human, lacking the chemistry of a firefly, does not even emit wavelengths as short as those of visible light.) So let us take 600 nanometers to be the smallest wavelength that could be expected. Now let's imagine a directional antenna which can focus all its radiation onto a spot 10 centimeters in diameter at a distance of 100 meters. (This is quite generous, since many parapsychology experiments use greater distances.) The Heisenberg uncertainty principle tells us that this antenna must be on the order of 1 millimeter in diameter. This would easily be visible with the help of a magnifying glass. Have you seen anything that looks like an antenna in the brain?"
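(The millimeter figure in that reply is easy to check with the standard diffraction relation; the sketch below is mine, not the author's, and simply puts in the numbers quoted in the speech above.)

    # Minimum size of a directional "antenna" (a rough diffraction estimate; assumed values).
    wavelength = 600e-9    # meters, the shortest wavelength allowed the body above
    distance = 100.0       # meters to the receiver
    spot = 0.10            # a 10-centimeter spot at the receiver

    # Diffraction limit: confining a beam of this wavelength to a spot of size `spot` at
    # distance `distance` requires an aperture of at least about wavelength * distance / spot.
    aperture_min = wavelength * distance / spot
    print(f"minimum aperture: about {aperture_min * 1e3:.1f} mm")   # about 0.6 mm, i.e., of order 1 mm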

"Well, how do you know the same relationship between energy and wavelength applies to our new kind of radiation?" the believer asks. "How do you know the same antenna formulas apply? How do you know we're talking about a wave at all? After all, we're talking about something completely new." "Okay, then you're talking about something so new that it is out of the domain of science altogether. If you are postulating a wave that does not obey the fundamental relationships between photon energy and wavelength, between wavelength and

antenna size, then you have left the known universe. Because we don't know of any kind of waves that behave the way you want them to behave. If you are talking about something that's not even a wave, then you are indeed traveling beyond the border of reality into a world of fantasy." "That doesn't prove these new energies don't exist." That's supposed to be the clincher. If you can't prove that something doesn't exist, then you are supposed to believe in it, or at least admit its possibility. An open mind is the greatest virtue. The foolishness of this position is illustrated by the following story. I

recently gave a lecture concerning a visit to a house purportedly possessed by demons, in which I expressed disbelief in the existence of the alleged demons, and gave general reasons for the desirability of a skeptical attitude towards paranormal phenomena. A lady in the audience asked me, "Does that mean you don't believe in the afterlife?" "That's right," I said. "Then how do you explain the resurrection of Christ?" she asked. That was supposed to floor me. "I don't try to," I replied. Then, after a

pause, I said, "I'm not required to," and a startled laugh erupted as the audience realized that a non-Christian is not required to explain allegations which are embedded exclusively in the Christian belief system. Similarly, scientists are not required to explain or believe in phenomena that lie outside of the knowledge system encompassed by science and for which there is no good evidence. It is easy enough to say, using the standard and highly popular rhetorical form: "You can't rule out the possibility that there are giant centipedes on the third planet of Aldebaran." Certainly, I can't rule out that possibility. I don't know that those

centipedes don't exist. But by the same token I don't know that they do exist. (I don't even know that there are three planets going around Aldebaran.) The possibility of extraterrestrial centipedes does not ensure their existence. I have no evidence for their existence, so why should I believe in them? If we allow an argument that assumes the existence of objects in the absence of empirical evidence either for or against their existence, then we can prove anything. All of which leads to an important generalization: Before we can talk about the existence of a physical entity, we must have an existence proof: some reason to believe that the entity exists. There is no reason to believe in a phenomenon for

which no physical evidence exists. The burden of proof is on those making the claim of existence. The concept of existence proof is well known to mathematicians. Physicists use the idea intuitively, although it is not stated in a formal manner. For the existence of a new phenomenon or entity to be provable, it must have properties that allow it to be detected physically, preferably by suitable instruments. Instruments are more desirable than unaided human senses, because the eye can be fooled and hallucinations are not unknown. Any external phenomenon perceptible to human senses can be recorded by cameras, tape recorders, or other electronic sensors. (Internal

phenomena such as feelings are not yet available for instrumental detection, but the possibility of this eventuality should not be dismissed.) None of this discussion should be taken as denying the usefulness of theorizing about hypothetical objects in order to predict how they would behave if they existed. Much work in particle physics is of this nature. Physicists theorized about neutrinos and quarks for years before enough evidence had accumulated to allow a firm belief in their existence. Theories about hypothetical objects are considered provisional and are important in devising experiments to observe the new phenomena if they do exist.

In designing such experiments, scientists must carefully apply the principles of physics. I would seriously doubt the existence of something that can be seen by the human eye but not by a camera (or whose image cannot be reflected by a mirror, the extensive vampire literature notwithstanding). I would be even more skeptical of something that can be seen only by one person. Despite all the numerous allegations of demons and poltergeists in recent history, nobody has produced concrete evidence in the form of a photograph or video tape, outside the cinema of imagination. The history of science is a graveyard of entities that were postulated and believed for long periods of time but finally fell by

the wayside, because they simply could not be detected and also were found to be unnecessary to explain observed phenomena. Phlogiston, caloric, and the ether are the prize exhibits of this sort of entity. In biology, elan vital is an undetected phenomenon whose utility continues to shrink as areas of ignorance diminish. In psychology, Jungians may still believe in psychic energy, but no one else does, for no experiment has provided physical evidence of its existence. What is even more damaging to believers is that the more we learn about the mechanism of the nervous system, the less we have need of this hypothesis. In order for an entity to be detected, there must be a means for it to interact with the

rest of the universe and thereby produce a physical effect. If, for example, someone claims the existence of a new kind of force field to account for ESP, then this new field must have a way of interacting with the nervous system so that information may be transmitted to the brain. In other words, a mechanism must exist to explain how the new force works, based on physical interactions and laws. Researchers in the field of parapsychology have long invoked a variety of forces to explain transmission of information from one mind to another: psychic energy, synchronicity, quantum connectivity, neutrinos, and some elusive thing they call "vibrations." (Vibrations of what, they don't say.) It is as though every

time something new is learned in physics, parapsychologists jump on the bandwagon and hypothesize that the new entity in question is the carrier of thought. But things are not so simple. Even assuming the existence of an unknown field forming a connection between minds, as long as it is steady and unchanging it cannot transmit any information. For this, the field has to change; that is, the field energy must be modulated. A steady, one-note sound conveys only one piece of information. It might mean something to you if you knew ahead of time that the presence of the tone meant something specific. But it could not mean more than one thing. The conveyance of more complex information than that requires

change in the tone. The more changes possible, the more different bits of information can be transmitted. Many kinds of modulation are available: amplitude, frequency, pulse, and so on. But everything that is to carry a message must undergo change. One interesting feature of waves sometimes causes confusion for those eager to believe in faster-than-light transmission of information. In a number of situations, a simple unmodulated wave may actually travel faster than the speed of light in a particular medium. The most common examples are electromagnetic waves in a wave guide and certain types of plasma waves in an ionized gas.1 The velocity of an unmodulated wave is called

the phase velocity, and no law prohibits this phase velocity from being greater than the speed of light, since the unmodulated wave is incapable of carrying a message. As soon as a message is sent, however, the wave must be modulated. The velocity of the resulting modulation, the group velocity, is always less than the speed of light.
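A standard illustration, not worked out in the text, is the hollow rectangular waveguide: for the lowest mode the phase and group velocities satisfy v_phase × v_group = c², so whenever the unmodulated wave runs faster than light, the modulation that carries the message runs correspondingly slower. A minimal numerical sketch, with an assumed guide width:

    # Phase versus group velocity in a hollow rectangular waveguide (illustrative, assumed numbers).
    from math import sqrt

    c = 3.0e8                    # speed of light, m/s
    width = 0.02286              # guide width in meters (a common microwave guide size, assumed)
    f_cutoff = c / (2 * width)   # cutoff frequency of the lowest mode, about 6.6 GHz
    f = 10e9                     # operate at 10 GHz

    factor = sqrt(1 - (f_cutoff / f)**2)
    v_phase = c / factor     # greater than c, but carries no information
    v_group = c * factor     # less than c; this is the speed of any modulation, i.e., of the message

    print(f"phase velocity: {v_phase:.2e} m/s (greater than c)")
    print(f"group velocity: {v_group:.2e} m/s (less than c)")
    print(f"their product divided by c^2: {v_phase * v_group / c**2:.3f}")   # exactly 1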

As soon as we are aware that a concrete mechanism must underlie every observable effect, we see that it is not enough merely to postulate a new and unknown field to explain the transmission of information from one mind to another. It must also be shown how the field becomes modulated by the sender, how this modulation passes through space, and how it causes physical changes in specific organs of the receiving mind. Explanations of this nature are of the utmost rarity in parapsychological literature. The concept of existence proof may also be used to clarify arguments in the evolution-creationism controversy. Evolutionists base their position on known physical laws that have been validated in the laboratory and in the field. The theory of evolution utilizes entities whose existence has in each case been validated by observation: fossils whose age can be determined by radionuclide dating; DNA molecules of known structure with

known means of transmitting characteristics from one generation to the next; known examples of evolution, such as the rather complete fossil record detailing the development of the horse from the Hyracotherium of 54 million years ago to the modern Equus; and current examples of ongoing evolution, such as the increasing resistance of certain bacteria to penicillin. Can creationism also claim to be based on entities whose existence has been proved? Henry M. Morris, one of the leaders in the field of creationism, defines the creation model in a book called The Scientific

Case for Creation. His definition states: "The creation model . . . defines a period of special creation in the beginning, during which the basic systems of nature were brought into existence in completed, functioning form right from the start. Since 'natural' processes do not accomplish such things at present, these creative processes must have been 'supernatural' processes, requiring an omnipotent, transcendent Creator for their implementation. Once the Creator (whoever He may be) had completed the work of creation, the creating processes were terminated and replaced with conserving processes, to maintain the world and to enable it to accomplish its purpose."2 The creationist definition explicitly

includes two entities or phenomena that have never been observed or verified by any physical means. These are: "supernatural processes" and an "omnipotent, transcendent Creator." (The use of the pronoun "He" to describe the creator gives away the religious underpinning of this theory and belies the term "scientific" used in the title of the book. It also belies recent legal efforts to claim that creationism is a science rather than a religion.) In addition to there being no existence proof, there is no explanation of how a supernatural process operates; that is, there is no mechanism. There is no description of any physical interaction from the creator to the things being created. There is no answer to the question of where the creator came from

and how He learned all the things He needed to know to create the universe. Use of the word "supernatural" is supposed to answer all questions, but such a broad explanation that purports to explain everything with one hypothetical concept is intrinsically empty. In fact, it is no explanation at all. It is not even an empirical statement. (See Section 7.2.) Mathematicians use the concept of "necessary and sufficient conditions" in proofs of mathematical theorems. Use of the analogous concept of "necessary and sufficient explanations" can aid in clarifying the logical status of creationism. A necessary explanation is the only explanation that will account for all the observed facts. A sufficient explanation is

one that explains all the observed facts by deduction from fundamental principles that have been verified by independent experiments. Whenever we find that two competing theories explain all the facts equally well, the two theories are generally found to be not only equivalent in explanatory power, but logically and mathematically equivalent as well. To apply this logic to the evolution-creationism controversy, we must first ask: What needs to be explained? What is observed is the variety of organisms now living on the earth, together with the fossil record of organisms that existed in the past. Our task is to explain the development of the existing organisms from their beginning to their present state,

as well as their relationship to past creatures seen in the fossil record. Within the context of science, for any such explanation to be sufficient in the sense mentioned above, the entities appealed to must be known to exist: there can be no recourse to "supernatural phenomena." Is creationism a necessary explanation? No, because the alternate theory of evolution, the only model accepted by professional scientists, gives a better explanation of the observations. Is creationism a sufficient explanation? No, because it is based on unobserved entities, namely, supernatural forces. It explains neither the nature nor the origin of these supernatural forces, nor the

mechanism by which they interacted with matter in the past, nor why they no longer operate in the present. It is therefore not sufficient to explain the observed facts. Consequently, creationism, as an explanation of the world, is neither necessary nor sufficient. The same tests can be applied to evolution. Is evolution a necessary explanation? Yes, because it is the only theory available to explain the observed facts. Creationism, being not sufficient, is not a competing theory. Is evolution a sufficient explanation? Here we must confront the fact that evolutionary theory does not as yet explain all the observations. Also, it is true that scientists

disagree over many details of mechanism within the general evolutionary model. However, no professional scientist disagrees that evolution, as a general model, is the only explanation of the development of life and its forms that fits within the domain of science. Evolution is therefore necessary and at least partially sufficient. Note that we do not oppose the concept of supernatural forces simply because they are supernatural. We oppose them because they have no explanatory abilities, and because they have not been observed. If you try to explain the unique characteristics of human beings by saying that "humans were created by supernatural forces," then you are merely stating an

empty generality. You are not telling us anything specific about these supernatural forces. In order for us to believe you, we must have some reason for doing so. Therefore you must demonstrate that what you claim actually happened, and also the means by which it happened. An objection to this type of logic might involve the question, "How do you account for the miracles that have happened over the centuries and for which individuals have been elevated to sainthood?" Again, we apply the criteria of necessity and sufficiency to these events, using the same logic we would use for any contemporary case of "faith healing." Assuming that something out of the ordinary did happen, usually

somebody being cured of a disease through apparently supernatural forces, we first ask: Is it necessary to invoke the supernatural to explain the observed? Are other explanations possible? Was the disease properly diagnosed? Could the immune system have become sufficiently activated by normal physical means to cause the observed remission? If not, we ask, is the supernatural explanation sufficient? With what kind of physical interaction did the supernatural force perform its deed? Where did this force originate? Furthermore, if supernatural forces acted more often in the past than they do now, what caused them to change their frequency of operation? Are they in fact still in operation?

I am, no doubt naively, asking the same questions of supernatural explanations that I do of natural ones. Perhaps this is against the rules, but if I don't understand how the supernatural operates, how can I explain anything with it? Without rules it is possible to explain anything by invoking the supernatural, as a result of which nothing is explained. And this would seem to be just what the invokers of the supernatural want. Rational explanation is not desired. They want belief. The very concept of rational explanation is one that arouses controversy, for the word "rational" means different things to different people. Some creationists find belief in evolution to be irrational. "How can it be rational," they ask, "to believe

that we all evolved by chance, with no guiding spirit?" To them it is irrational to believe in something they find inconceivable. This type of logic is precisely opposite to that of the scientist, who believes that rational thinking starts with physical evidence, and that even though an explanation may be long, difficult, and perhaps hard to believe, that alone does not make it irrational. Ultimately, it boils down to a battle between competing types of skepticism. Scientists are skeptical of any theory that posits supernatural entities. For their part, enthusiasts of the paranormal are skeptical of theories that posit only natural entities. Perhaps the battle will not be completely decided until scientists learn enough to

offer sufficient explanations of everything that exists. If and when they can make living organisms from scratch, when they can make robots that really think, then elan vital and psychic energy will become unnecessary constructs.

2. Falsification

In 1959 the philosopher Karl Popper, while discussing problems of scientific induction, stated: "It must be possible for an empirical scientific system to be refuted by experience."3 That is, an empirical scientific system must be falsifiable. The statement, "Either it will rain here tomorrow or it will not rain here tomorrow" is, by this criterion, not empirical, since it cannot be falsified. It

is, in fact, a tautology: a statement designed to be always true and therefore not subject to falsification. A statement such as "God's will determines everything that happens in the universe" is also unfalsifiable, since any experiment designed to test it is also a happening in the universe and therefore determined by "God's will" regardless of the outcome of the test. In other words, whether the test gives a positive or negative result, it is all God's will. The statement cannot be proved false and therefore cannot be regarded as part of an empirical scientific system. Popper's concept of falsifiability has caused a great deal of confusion, partly

because it does not say everything there is to say about verification of theories, and partly because a number of people have distorted its meaning, using it in ways that defy common sense. A number of pseudoscientists, particularly creationists, have wrapped themselves in the cloak of falsifiability, making the claim that since their theories are falsifiable in principle, they are scientifically viable and therefore deserve to be taught in schools on the same footing as the evolution theories supported by professional scientists. The creationists' idea seems to be that falsifiability is the only thing a theory needs to make it valid. Even if that were so, the fact is that the theory of creationism, the idea that the

universe was created in the recent past by supernatural forces, is not falsifiable. You can always account for dinosaur fossils by saying that God put them there before the Flood. You can always claim that radionuclide dating does not work because supernatural forces changed the rates of radioactive decay a few thousand years ago. If mutations producing new species are observed in the future, creationists can always claim that these mutations resulted from the guidance of a higher intelligence rather than from the workings of physical law. These statements cannot be tested and proved false, because they are ad hoc inventions designed to explain away everything that has ever been or will ever be observed. Creationism is therefore not an empirical

scientific system. Falsifiability, by itself, does not make a theory valid. It merely makes it empirical, or testable. For a physical theory to be valid a number of criteria must be met: 1. The physical objects or entities that are the subject of the theory must be shown to exist. This is true in all the physical sciences. Physics is based on the observed existence of electrons, protons, and other particles. Evolution is based on the existence of life forms that preceded present species in time and whose remains are observed in fossils. Supernatural forces, on the other hand, have not been observed in a physical sense.

2. The objects observed must interact with each other and with the rest of the world according to well-defined rules, and all such interactions must be associated with particular mechanisms. For example: The force between electrons and protons is electromagnetic in nature, is described by a specific mathematical expression, and (in quantum theory) results from the exchange of photons between pairs of particles. Similarly, extinction of the dinosaurs was caused by specific physical events such as a cooling of the earth's surface, which in turn were caused by mechanisms whose precise nature is still uncertain. 3. Once the rules governing interactions have been determined by observation, they

may not be changed on an ad hoc basis simply for the purpose of asserting or denying a theory. There must be direct evidence that the rules have changed. For example, in order to "prove" that radionuclide dating is incorrect and that nothing on earth is more than several thousand years old, creationists claim that rates of radioactive decay have changed during the recent past. This is circular reasoning. We have no reason to believe that the rules have changed, since there is no evidence that the laws governing nuclear reactions have not been constant since the beginning of the universe. 4. The theory must be capable of making specific predictions (or, in the case of evolution, retrodictions) that are

falsifiable. For example, the theory of relativity predicts that the speed of light (in free space) is a constant, and prescribes specific tests that may prove this prediction false. The theory of evolution retrodicts that human beings did not live at the same time as dinosaurs, and therefore predicts that no fossils with contemporaneous human and dinosaur remains will ever be found. 5. The tests for falsifiability must actually have taken place, since otherwise the theory has not been tested and is thus not necessarily valid. Attempts to falsify the constancy of the speed of light have been made repeatedly and have always failed. In this sense, the constancy of the speed of light has been verified. A failure of

falsification amounts to a verification. 6. Tests for falsifiability are not the only possible kinds of tests. Dirac's relativistic quantum theory predicted the existence of a particle with the same mass as an electron but with a positive electric charge. This particle (the positron) was subsequently found to exist, and its behavior was found to agree with that predicted by the theory. This was a positive prediction and a positive test. Similarly, the theory of relativity predicts that the planet Mercury will travel about the sun in a certain precisely calculable orbit. Other, competing theories of gravitation predict slightly different orbits. Measuring the exact motion of the planet Mercury is thus a positive test to

decide between the competing theories. 7. When two theories compete, each should predict measurable consequences that differ in at least one respect. If the predictions do not differ, there is no way of deciding between the two theories. Only by testing such differences can one actually decide between them. Creationism is designed so that everything observed in the world follows from its premises; it merely explains them differently. Therefore (according to the creationists), it is just as good a science as the theory of evolution and so should be taught on an equal footing in public schools. However, as we saw above, creationism is not empirical. It explains by fiat all the things that have ever been

observed, employing mechanisms that have never been observed. In order to become more than an empty hypothesis, creationism would have to predict something that has not yet been observed. The prediction would have to be a deduction from general principles, as are the deductions of evolution. It would also have to differ from one or more of the predictions of evolution. Then, if the creationist prediction was verified by physical observation, it could be taken seriously, but certainly not until then. 8. In short, there should be at least one good reason for believing in a theory's validity other than simple faith, tradition, or a liking for its sound. Theories require verification as well as falsification.

Popper's emphasis on falsification originated from his concern with the central question of scientific induction: how can we justify our belief in a universal generalization that applies to all cases when we can test that generalization only for a finite number of cases? We have already asked this question for conservation of energy. How, that is, can we believe energy is conserved everywhere and under all conditions, when we can make tests only in a few places under a few conditions? How many tests are required before we may believe the law completely? Not believing that induction can be applied rigorously to empirical laws, Popper supported a hypothetico-deductive

method, which begins with preliminary observations of a natural phenomenon. If these observations give some evidence of a regularity, we begin to build a conjecture or hypothesis. For example: Observing a small number of perpetual motion machines, we find that none of them work and so hypothesize that energy cannot be created. We make further attempts to create energy, and after sufficient failures become convinced that the law of conservation of energy applies in all cases. The natural question is: When have you done enough experiments to be rigorously sure that conservation of energy is always true? How do you know that the next experiment with a new and improved

machine will not show the law to be false? The classical answer has been: You can never be sure. This attitude leads to the feeling that all science is tentative and provisional. Why, then, does the U.S. Patent Office refuse to look at perpetual motion machines? The particle model of matter provides a totally new way of thinking about these questions. Recall the experiment described in Section 4.4, which verifies conservation of energy by using the resonant absorption of gamma ray photons to show that the photons emitted by the radioactive source have exactly the same energy as the photons absorbed by the resonant absorber. You might think that this is just one experiment and therefore

just one piece of data that verifies the conservation law. However, on an atomic level, every individual emission and absorption represents an experiment. In an average run of data collection, an enormous number of photons are counted. The constancy of the energy of each photon contributes one bit of data to the verification of the law. To get an idea of the numbers, suppose we run one of these experiments for an hour, using a one-millicurie radioactive source, and assuming that 0.1% of the emitted photons are counted. (Most will go off in all directions, rather than through the absorber.) After an hour's run, 100 million photons have been counted.
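The arithmetic behind that figure is simple enough to lay out explicitly (my sketch, not the author's; it assumes, with the text, one photon per disintegration and a one-millicurie source, which decays 3.7 × 10⁷ times per second).

    # The counting arithmetic behind "100 million photons in an hour" (assumptions follow the text).
    decays_per_second = 3.7e7    # one millicurie
    seconds = 3600               # one hour
    fraction_counted = 1e-3      # 0.1 percent of the emitted photons pass through the absorber and are counted

    photons_counted = decays_per_second * seconds * fraction_counted
    print(f"photons counted in one hour: about {photons_counted:.1e}")   # about 1.3e8, on the order of 100 million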

This means that 100 million separate experiments have taken place in the one hour, all of them showing the photon energy to be constant (to within one part in 10¹⁵). What is the probability that photons observed in other experiments will behave differently, changing their energy while flying from one place to another? One of the fundamental properties of photons is that all photons of the same energy are completely identical, so identical that you cannot identify different photons in a crowd. That is, you cannot put a label on photons so as to talk about photon A and photon B. They are completely indistinguishable. This property of photons is not just a hypothesis; it leads to definite observable behaviors. It is responsible,

for example, for the fact that lasers work the way they do. Identity of photons thus leads to definite, positive, verifiable predictions. Exchange of photons between electrons and protons is the fundamental mechanism behind the electromagnetic interaction. The electromagnetic interaction, in turn, is responsible for the functioning of all mechanical devices, electronics, biological systems, chemistry, hydrodynamics, and the human mind. (Even the mechanical force exerted by a lever or gear can be reduced to the electromagnetic interaction between the atoms of the solid materials.) Thus, the millions of microscopic experiments performed on photons allow us to say with

great certainty that the total energy of any closed system is not changed by the action of any mechanical, thermal, electrical, or biological apparatus. From this certainty we reach the many conclusions already discussed in Section 5.3. The atomic model has thus given us a way of dealing with inductive inference that leads to greater confidence in knowledge than we could ever have possessed in previous centuries. With this model, it is not necessary to do an infinite number of experiments to show that conservation of energy holds true for all actions mediated by the four fundamental interactions. Furthermore, the realization that one macroscopic experiment consists of millions of microscopic experiments

improves the statistics and narrows the uncertainties. The idea of falsification has a psychological importance that transcends the physical and logical. First, a number of important discoveries in science have been forced on the reluctant consciousness of the skeptical scientific community by failures of falsification. Doubters of relativity have been won over by the failure of numerous experiments to show that the measured speed of light depends in any way on the motion of either the source or the detector. Second, falsification plays an important role in the methodology of research, particularly in situations where one is looking for phenomena that are statistical in nature

and are difficult to observe. Testing a theory by trying to falsify it is safer than starting out with the thought of verifying it. Consider the testing of a new drug or a new theory of psychotherapy on a group of patients. What one tries to observe is a small increase in cure rate within a background of negative results in the sample population. Trying too hard to verify the theory may lead the experimenter into error. It was noted early in this century that if a doctor gives a drug to a patient in the strong belief that the drug will work, the patient tends to notice (either consciously or unconsciously) this air of optimism and consequently reports improvement even when the drug has no real biological effect. For this reason, the

double-blind method was developed, in which neither the patient nor the person delivering the medication knows whether the patient is receiving the experimental drug or a placebo, thus eliminating the attitude of the experimenter from the experiment. These principles are especially important in the area of parapsychology research. The nature of the enterprise practically guarantees that workers in the field will believe strongly in their theories. For this reason, they are prone to make errors in reading and interpreting the data, errors that make the data appear to verify the theory. Double-blind methods are therefore doubly important here. In addition, proper statistical procedures are

crucial. It is important to decide ahead of time which run of trials will be considered preliminary, which run will be the real thing, and how many trials will be included in each run. If this protocol is not followed, the following may occur: You are testing the apparatus and rehearsing the personnel. While doing so, you have a run of positive results obtained strictly by chance. Encouraged by this, you write them down as part of the experimental results. The fallacy is that you started to record the experimental results after recognizing that these results were better than chance. These false positives will thereafter distort the results of the entire experiment. For this reason you must agree ahead of time when data recording shall begin, and also when it shall stop. It

is unfair to make the decision to stop just after you have had a good sequence of positive results. We all know there is a finite chance of randomly throwing ten heads in a row. Every gambler knows how important it is to stop while he's ahead, but experimenters must not employ that knowledge.
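The danger of stopping "while you're ahead" can be made concrete with a small simulation (mine, not the author's; the trial lengths and the stopping rule are arbitrary choices). A purely chance process, guessing coin flips, is scored in two ways: honestly, over a fixed and pre-declared number of trials, and dishonestly, stopping as soon as the running score happens to look well above chance. The dishonest rule manufactures apparent "hits" out of pure noise.

    # Simulation: optional stopping turns pure chance into apparent "hits" (illustrative sketch).
    import random

    def run_subject(n_trials, stop_when_ahead_by):
        """Score chance-level guessing two ways: fixed length versus stop-while-ahead."""
        hits = 0
        early_score = None
        for i in range(1, n_trials + 1):
            hits += random.random() < 0.5             # each guess is right half the time, by chance alone
            if early_score is None and hits - i / 2 >= stop_when_ahead_by:
                early_score = hits / i                # the over-eager experimenter stops recording here
        honest_score = hits / n_trials
        return honest_score, early_score if early_score is not None else honest_score

    random.seed(1)
    honest, dishonest = [], []
    for _ in range(1000):                             # a thousand simulated "subjects"
        h, d = run_subject(n_trials=200, stop_when_ahead_by=8)
        honest.append(h)
        dishonest.append(d)

    print(f"mean hit rate, fixed 200 trials: {sum(honest) / len(honest):.3f}")       # close to 0.50
    print(f"mean hit rate, stop while ahead: {sum(dishonest) / len(dishonest):.3f}")  # noticeably above 0.50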

These comments apply as much to the hard sciences as to parapsychology. Even in a science as hardheaded as physics, it is good psychology to be skeptical when doing experiments in which the desired signals are difficult to distinguish from background noise, especially when the theory goes against common sense or established knowledge. You may be doing a Nobel prize-winning experiment, but if you fall in love with the theory so much that your interpretation of the data becomes overeager, you may forfeit the prize and end up in total disgrace. I have seen a physicist give a lecture on measurement of gamma ray energies in which he claimed that every high point on the curve represented a gamma ray, even though his instrument could not possibly separate gamma rays so close together on the energy scale. The man wanted to find lots of gamma rays, so he did, forgetting what he should have known about the resolving power of his instrument and about statistical fluctuations in gamma ray detection. This experiment added nothing to the accumulated body of physical knowledge and wasted a lot of taxpayer

money. A more famous example of misinterpretation based on wishful thinking was the case of the Martian canals. In 1877 and in 1881, when the planet Mars was especially close to Earth, the Italian astronomer Giovanni Schiaparelli made careful studies of the surface features of Mars and reported that he saw canali (channels), straight lines that joined together in complex patterns on the face of the red planet. Schiaparelli was working purely with visual observation; astronomical photography was not yet in a state advanced enough to be of any assistance. The English-speaking press immediately translated canali into "canals," leaping to the

conclusion that Mars must be inhabited by intelligent, canal-building beings. Since other astronomers found it difficult or impossible to find the canals through their own observations, a controversy arose as to their existence. The French astronomer Camille Flammarion quickly joined the fray on the side of the pro-canal forces. Flammarion, a colorful personality, believed that all planets supported life. He expressed these views in the widely read book Popular Astronomy, published in 1879 and translated into English in 1894. Flammarion subsequently went into psychical research, but his greatest influence on the public was through his propaganda for the Martian canals and for extraterrestrial life.

Percival Lowell, a wealthy Bostonian, built his own observatory at Flagstaff, Arizona, and beginning in 1894 spent many years in a detailed study of Mars, taking thousands of photographs during that time. Lowell "saw" even more canals on Mars than did Schiaparelli. Unfortunately, most other astronomers failed to observe the same lines on their photographs. Some of them suggested that perhaps what Lowell saw were creations of his own eyes, aided and abetted by a devout desire to see. Science fiction writers ignored the skeptics, and for many decades the canals of Mars and the civilizations that created them were standard plot devices for

writers of interplanetary fiction. However, reality finally caught up with fantasy. When earthlings landed their space vehicles on Mars, there was nothing to be seen that looked remotely like a canal or a straight channel. Martian canals abruptly vanished from science fiction and from books on popular astronomy. It was one of the few times in history when discovery of the truth completely silenced the fantasists. (Except for a few die-hards who think that our space exploits are merely hoaxes.)

3. Occam's Razor

William of Occam (or Ockham) was born in England toward the end of the 13th century, became a member of the

Franciscan order, and studied at Oxford, where he lectured from 1315 to 1319. His views on the theory of knowledge were remarkably modern. Having led the way toward the mechanistic view of Renaissance science, he is now considered the central figure of 14th-century Scholasticism.4 It was his belief that much of theology was a matter of faith and not amenable to reason.5 Occam is best known for a maxim which, while not found in his own writings, has become known as Occam's Razor: "Entities are not to be multiplied without necessity." An equivalent rule, which does appear in Occam's writings, is: "It is vain to do with more what can be done with fewer."6 This rule was a response to the

theory-spinning of his contemporaries, who kept inventing more and more "ideal objects" that were supposed to represent the reality behind the earthly objects perceptible by humans. But, as Bertrand Russell says, ". . . if everything in some science can be interpreted without assuming this or that hypothetical entity, there is no ground for assuming it."7 Occam's razor, as a guiding principle, was of great use in ridding physics and chemistry of the numerous unobservable fluids posited during the 18th and 19th centuries: caloric, phlogiston, the ether, and so on. At the same time, it hindered physicists in acknowledging the reality of atoms and molecules, for as far as most 19th-century physicists were concerned,

these were also undetectable and therefore unnecessary entities. In time, however, good evidence for their existence was found and physicists accepted them as necessary. What is the present status of Occam's razor? It is certainly not an empirical natural law; it is, rather, a heuristic principle that leads scientists along the path of the most efficient investigations. In urging us to keep the number of fundamental particles and forces to a minimum, its spirit still influences modern particle physics, for it was the feeling of being inundated by too many entities that made us uneasy when the number of mesons and baryons began proliferating without end during the 1960s. The

invention of the quark was a neat way of reducing the number of fundamental particles. Similarly, the current search for a unified field theory is motivated by a desire to reduce the four fundamental forces to one. In recent years, Occam's razor has been interpreted in a number of ways that give it meanings different from the one Occam originally intended. One current interpretation is: "If two theories equally fit all the observed facts, the one requiring fewer or simpler assumptions is to be accepted as more nearly valid." On close examination, this interpretation of Occam's razor appears to be more a convenient prejudice than a useful rule. If

two theories equally fit all the observed facts, then it makes no difference which of the two theories you choose. One of them is no more valid than the other. It is true that you would probably prefer to use the theory with the fewer or simpler assumptions, or with the simpler mathematical equations, but that has nothing to do with validity. Occam's razor has sometimes been watered down to the point where its interpretation no longer has meaning. One recent wording of the rule claims that "the simplest explanation for an observation is most likely to be the correct one."8 This statement is so oversimplified as to be of little use. First, the "simplest explanation" means different things to different people.

A religious fundamentalist finds divine creation to be a simpler explanation of the world than the long, tedious workings-out of geology and evolution. Second, if we are merely trying to explain one observation, the simplest explanation is generally not the correct one. For example, Bohr's theory of the atom allows us to calculate the correct wavelengths of the light emitted from the hydrogen atom. It is the simplest theory that explains the one observation.
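(As an aside, the Bohr calculation referred to here boils down to the Rydberg formula, 1/wavelength = R(1/n1² - 1/n2²); a few lines, with the standard value of R assumed, reproduce the visible hydrogen lines.)

    # The Balmer lines of hydrogen from the Rydberg formula (standard constant assumed).
    R = 1.097e7    # Rydberg constant, in 1/meter
    for n in (3, 4, 5):
        wavelength = 1 / (R * (1 / 2**2 - 1 / n**2))
        print(f"transition n = {n} to n = 2: {wavelength * 1e9:.0f} nm")
    # About 656, 486, and 434 nm: the familiar red, blue-green, and violet lines of hydrogen.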

However, it is not the most correct theory, since Schroedinger's theory of quantum mechanics, a much more complex theory, explains not only the wavelengths of emitted light but many other observations as well that the Bohr theory cannot begin to explain. For example, using the Bohr theory it is impossible to explain why two hydrogen atoms join to make a hydrogen molecule. By contrast, this explanation readily arises from the Schroedinger theory, which is therefore preferred.9 For this reason it is important to keep in mind, when invoking Occam's razor, that we are trying to explain all the observations, not just some of them. The original formulation of Occam's razor is the only one that carries any kind of rigor, for it does not depend on the vague idea of "simplest explanation." It quantifies the concept of simplicity by specifying that in creating a theory to explain a set of observations, we try to use the smallest number of necessary elementary entities.

Occam's razor, while not amenable to scientific proof, serves as a useful rule of thumb, especially when we are faced with questions such as: "How was the faith healer able to tell that Nora was pregnant even before the doctor told her?" The implication is that psychic forces were at work, since no other explanation would suffice. But before appealing to the paranormal in order to explain this event, let us first perform two precautionary tests: 1. We must assure ourselves that the event actually took place before trying to explain it. The person who asked me the above question had read of the faith healer in a book devoted to "things of this kind" and waxed indignant at my suggestion that

the author might have been lying. Yet authors have been known to lie, so this must be considered a possibility. 2. Next, if we are sure that the event happened, we try to explain it by normal means, psychic forces being an entity that may not be necessary. The faith healer, for example, might have been an adept reader of facial expressions, which occasionally give signs of pregnancy even before the prospective mother is aware of her condition. Or he might have made a clever guess that turned out to be right. What tends to distort the statistics in cases of this kind is that correct guesses tend to be remembered, while incorrect guesses do not. Therefore, since at least two normal explanations are possible, an extension of

hypothetical entities into the paranormal is unnecessary. In a case such as this, nothing can be concluded rigorously without a real investigation into what actually happened. Very often an explanation is demanded for events that may be no more than literary fictions. If we are asked to explain how Moses caused the Red Sea to part, we would first like to know what really happened before we expend a great deal of energy and time trying to explain it. And while it is great fun for Baker Street Irregulars to analyze the activities of Sherlock Holmes as though he were a real person, it must be kept in mind that he was only a character in a story. This suspension of belief is difficult for people

who tend to trust in the implicit truth of the printed word and who find the world of the soap opera more real than the world of reality. Notes 1. M. A. Rothman, "Things That Go Faster Than Light," Scientific American, 203 (July 1960), p. 142. 2. H. M. Morris, The Scientific Case for Creation (San Diego: Creation- Life Publishers, 1977). 3. K. R. Popper, The Logic of Scientific Discovery (New York: Harper Torchbooks, 1968), p. 41. 4. E. J. Dijksterhuis, The Mechanization

of the World Picture (Oxford University Press, 1969), p. 164.
5. I. Asimov, Asimov's Biographical Encyclopedia of Science and Technology (New York: Doubleday, 1972), p. 60.
6. B. Russell, A History of Western Philosophy (New York: Simon & Schuster, 1945), p. 472.
7. Ibid.
8. E. A. Schneour, "Occam's Razor," Skeptical Inquirer, 10 (Summer 1986), p. 311.
9. R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics, vol. 3 (Reading, Mass.: Addison-Wesley,

1965), p. 10.18.

8. LIVING SKEPTICALLY

1. Proper skepticism

The worst kind of skeptic is the person who believes nothing, and as a result is willing to believe anything. Such a person thinks that since nothing is known for sure and everything we believe now may change in the future, we may eventually be able to go faster than the speed of light, discover time travel, or find that ESP is real. Anything is possible. I have met this person. I once asked him: "We now know that the earth has a spheroidal shape. Do you believe that two hundred years from now we will think something different?" His reply was

evasive. I got the impression of a person afraid to hold a firm opinion about anything, who walks on shifting quicksand, without a solid grounding of fundamental knowledge. The proper skeptic is a pragmatist, a person whose knowledge is based on experience and observation, who knows the difference between belief and knowledge and remembers where beliefs come from and how knowledge enters the mind. Judicious skepticism requires us not only to look at unsubstantiated claims with a jaundiced eye but also to change our opinions about the world when sufficient evidence has been presented. Here are some steps in the development of

proper skepticism: 1. Don't believe everything you read or hear. Writers and speakers may be mistaken or deluded, even some very famous ones. People have also been known to lie. It's much easier to lie in writing than in speech, because you don't have to confront your public personally. Remember that the world of psychic phenomena and faith healing abounds with con-persons. Where money is to be made (as in the selling of books to the gullible), the sensible customer moves with caution. Don't even believe everything you see, not without independent confirmation. Your own eyes may lie to you. You may

think you see a psychic bend a spoon without touching it, but look again. Sleight of hand, optical illusions, and hallucinations are facts of life. When you see things or hear sounds that do not make sense, remember: it's not necessary to be insane in order to have hallucinations. A tendency to believe in the literal truth of the printed word is a symptom of gullibility and a hallmark of the religious fundamentalist. People with impaired reasoning ability and little firsthand knowledge of the world tend to rely on quotations from erudite references whether or not they are relevant or accurate. A believer in Joseph Newman's energy machine tried to prove to me that the device was based on a valid theory by

quoting the writings of Michael Faraday. He had not enough sophistication to understand that any modern textbook on electromagnetism will provide more authentic information than that obtainable from Faraday. This is not to denigrate Faraday's abilities. We simply know more about electromagnetism now than anybody did a hundred and fifty years ago. Quotations from a classical authority must always be looked at with a full measure of skepticism. Even professional scientific journals have to be read critically. If the journal is one in which all papers must be refereed before publication, then the contents are fairly reliable, particularly in the hard sciences like physics and chemistry,

where experimental results are readily duplicated. In softer sciences such as biology and psychology, faking of results is easier to do and is harder to catch. As a result, some dreadful cases of scientific fraud have made headlines in recent years. That they make such prominent headlines indicates how rare they are and how excellent scientific ethics is in general. In unrefereed journals, anything goes. (The Journal of Parapsychology, mentioned in Section 4.5, is a special case.) Physics, thought of as the most exact of the sciences, is also not totally free of flummery. One journal on foundations of physics published a paper purporting to prove theoretically that elementary particles have consciousness. This theory

was experimentally disproved in the next issue. I suspect the whole thing was a joke. On a more serious matter, note the great care with which claims of high-temperature superconductivity have been presented in recent journals. Here is a prime example of searching for small effects with experiments that are difficult to replicate. Nobody will believe the results until several laboratories have independently done the same experiments and achieved the same results. Of special interest is the Bulletin of the American Physical Society, which publishes one-paragraph abstracts of ten-minute papers presented at meetings of the American Physical Society. The Bulletin allows anything submitted by any member

to appear, as long as it is within the required length limit. The result is that in almost every issue of the Bulletin, a small number of crank papers appear, expounding grandiose theories of the universe that are comprehensible only to the author. New particle theories and disproofs of Einstein's relativity theory are among the favorite topics. (A recent issue had a paper proving, with calculations based on revelations in the Koran, that the universe is precisely 16.427 billion years old.)1 Papers like these are written by modern counterparts of the 19th-century paradoxers, who spent futile lives trying to accomplish the trisection of the angle and the squaring of the circle, feats that mathematicians had already rigorously proved to be

impossible.2 What distinguishes such crank papers from legitimate ones? There are a number of tipoffs, the first being the grandiosity of the theory. Second is the paucity of facts. Third is a specialized vocabulary that only the author understands. Admittedly, it may be difficult for a lay reader to distinguish strange words such as "quarks" and "leptons" (which are legitimate) from others such as "M-fields" and "orgones" (which refer to things that have never been observed), but a professional reader usually can. After all, a physicist should know the vocabulary of his profession, as well as the nuances of style in which papers are written. Even so, a physicist in one specialty may find it difficult to judge

papers written in another. As for nonprofessionals, they don't have a chance. To confuse the issue further, sometimes one must decide whether the author is serious or is kidding. There are rumors that some of the crank papers appearing in the Bulletin of the American Physical Society are written deliberately for purposes of entertainment and amusement. However, this is not a matter to be taken lightly. In the 1950s, a disgruntled paranoid, angry that physicists had not taken seriously his self-published pamphlets concerning a new theory of elementary particles, walked into the offices of the American Physical Society and shot to death the secretary as she sat

behind her desk. 2. When anybody makes a claim concerning a supposedly paranormal event, determine, if you can, whether or not that event actually happened. I recently witnessed a press conference held at a house in West Pittston, Pennsylvania, whose owner claimed that he had been repeatedly attacked by one or more succubi within his house. The assembled reporters, many of them well-known television news personalities, flew into a concerted rage when it turned out that the demon's victim was unable or unwilling to present any of the physical evidence he had promised. With audio and video tape recorders so easily available, he had no excuse for not having permanently

recorded any sounds or sights. If a thing makes a noise, then it can be recorded, and if it can be seen, then it can be photographed. Therefore, if no recordings or pictures are provided, there is no reason to believe that anything happened. On the other hand, sounds and sights are easily faked, as any special-effects artist can tell you. How, then, can one decide whether presented evidence is legitimate? This is a nontrivial question, to which there is no simple answer. Criteria of internal consistency can be helpful: Do the shadows in a photograph go in the right direction? Is it possible to estimate the size of the UFO in the photograph? These problems are similar to those encountered in any investigation of art fraud: Do the

brush strokes in a painting match those in others known to be genuine? Are the pigments characteristic of the artist's time period? In the case of an artifact like the Shroud of Turin, we must ask: Does radiocarbon dating reveal an age consistent with the claimed origin of the cloth? (Significantly, the owners of the shroud have been resistant to the protocols recommended by scientists for radiocarbon dating, the one test that would unambiguously reveal the most important fact about the cloth, namely, its age.) There are also criteria of external consistency: Do different instruments and observations agree with each other? In

general, the questions relevant here are those involved in any kind of scientific research. There is no substitute for technical expertise, broad experience, and detailed investigation. 3. If a claim is made for an unusual experience or observation, try to explain it by normal phenomena before invoking paranormal phenomena. (See Occam's Razor, Section 7.3.) If you see something you don't understand, the probability is that you simply don't have enough information or knowledge to understand what it is that you saw. Claims of UFO sightings have, in the great majority of cases, been shown to be viewings of phenomena that were in no sense unusual or abnormal. Bright planets, such as Venus

or Jupiter, reflections from aircraft windows, miragelike images of distant car headlights, balloons, and space launchings have all contributed to UFO lore. "But," we are always asked, "what are we to make of the small fraction of sightings which remain unexplained?" Rationally, the only thing the unexplained sightings prove is that not enough information was available for us to draw a conclusion. None of this should be taken as proof that visitations by vehicles from other planets are totally impossible. The proper skeptic merely doubts that the UFO hysteria of the past few decades is more than one of the many mass delusions that have swept through the public mind like tidal waves in previous centuries. The impossibility of

faster-than-light travel makes extreme skepticism the appropriate response to claims that UFOs are space ships reaching us from other star systems. A prime example of psychic belief arising from misinterpretations of perfectly normal phenomena is the case of Kirlian photography, "invented" by Semyon and Valentina Kirlian in 1939 and publicized during the 1970s by the highly credulous Sheila Ostrander and Lynn Schroeder.3 As described by the authors, Kirlian photography is performed by clamping an object such as a plant leaf together with a sheet of photographic paper between a pair of flat plates to which a high-frequency alternating voltage is applied. In the words of Ostrander and Schroeder, ". . . a high frequency field is created

between the clamps which apparently causes the object to radiate some sort of bioluminescence onto the photo paper." While the authors find it necessary to explain their observations by some kind of psychic energy field (which they camouflage with the term "bioluminescence"), it is immediately obvious to any physicist with experience in high-voltage phenomena or plasma physics that what he is seeing is nothing more than an ordinary electrical discharge (also known as a gaseous, or corona, discharge). It is essentially the same phenomenon that you see in any neon sign; light is given off by the atoms of a gas after they are ionized by the high-voltage electric field. Obfuscation is created by the authors when they speak of high-

frequency fields instead of high-voltage fields. The voltage is more relevant to the phenomenon than the frequency. By omitting mention of the actual cause of the light, they make it easier to attribute what they saw to psychic phenomena. Many of the reports of Kirlian photography that subsequently appeared in the popular press conveniently failed to mention the application of electric fields at all! In this way, what starts out as a normal electrical phenomenon is translated by the omission of a few words into a paranormal phenomenon. This propaganda is made effective by the scientifically uneducated person's belief that electricity is essentially magic in the first place. (It is not clear whether the confused writing of Ostrander and Schroeder is due to an

ignorance of physics or represents a purposeful attempt to deceive the public.) 4. Don't underestimate the power of coincidence. We have all had the experience of receiving a phone call from a person we have just been thinking about. Then we think: it must have been ESP. Or perhaps the synchronicity of Carl Jung. It's much more fun than the simpler explanation: There is a finite probability that you thought of this person for ordinary reasons and that this person decided to call you at approximately the same time for perfectly unrelated reasons. The probability of this coincidence occurring may be small, but things with small probabilities often happen. The chance of getting a royal flush in poker is small, but

it is no smaller than the chance of getting any other particular set of five cards. The reason the royal flush comes up so rarely is that there are vastly more five-card hands that are not royal flushes. You just never remember all the times you thought of a person and he or she did not call, or all the times you wanted a royal flush and didn't draw it. In addition, we must remember that there are over 200 million people in the United States. Therefore a random event with a probability of only one chance out of 100 million is quite likely to happen to somebody. As far as each person is concerned, this is a very unlikely event, but since there are so many people who are possible targets, it is bound to happen to somebody. Then that person reports it to the newspapers as

something that must have been caused by ESP or other paranormal phenomena. 5. If a claim is made for an experience that violates one or more of the laws of denial, then extend your skeptical antennae and be doubly cautious. Watch out for anything that requires creation of energy or the transmission of energy from one place to another by means unknown to science. Better to buy the Brooklyn Bridge than invest money in a perpetual motion machine. When somebody says he has a way of transmitting messages directly from one mind to another, insist that he explain exactly how electrons in the receiving brain are activated to record the information. When someone claims he has performed an experiment that proves the

existence of ESP, don't believe it before carefully examining the experimental design, the data, and the actual performance of the experiment. Remember that positive results in parapsychology experiments are always obtained by believers, never by unbelievers. In particular, remember that none of these experiments work when a professional magician is in the same room.4 And don't believe explanations that insist that the presence of the unbeliever caused the experiment to fail. (Of course it did, but not for the reasons given.) As we have repeatedly emphasized, the laws of denial are the heart of the matter. Virtually all of the paranormal phenomena that have attracted attention during the past

century have involved the violation of one or more of the laws of nature that say: You can't do this! Those attracted to the paranormal resent scientists' telling them what they cannot do. The laws of denial are the most fundamental laws of nature. For this reason, clear evidence that one of these laws is violated by a paranormal phenomenon would win an immediate Nobel Prize. (Remember Lee and Yang's 1957 Nobel Prize for nonconservation of parity.) Since the stakes are so high, and since conservation of energy and momentum has been verified by every experiment in physics, claims of their nonconservation must be examined with the utmost rigor. To reiterate: If anyone claims to have seen a phenomenon that requires violation of conservation of

energy or of some other law of denial, and if that person has no rational explanation, or if the explanation given involves supernatural elements, then proper skepticism is the only suitable response. 6. Be skeptical of the opinions of experts outside their areas of expertise. Several professional engineers testified that they had examined the Newman machine and had measured more energy coming out of it than going in. The National Bureau of Standards disagreed. The measurement of power in electrical systems carrying pulsed currents and voltages is a specialized subject. It can't be done with ordinary ammeters and voltmeters. Clearly Mr. Newman's experts were operating outside their areas of capability

when they took up the task of evaluating his device. A number of important physicists (including one of the world's leading quantum theorists) made fools of themselves while "investigating" the tricks of Uri Geller, the spoon bender. James Randi, the magician, has repeatedly emphasized the fact that magicians, being experts at trickery, are the ones most qualified to investigate tricksters such as Geller. They expect to see fakery, and they know where to look for it. While this book has emphasized the physical reasons for skepticism, it is important to recognize the contributions of professional magicians to the exposure of

fraud. The reason is not simply their expertise in recognizing trickery. In the world outside science, the statements of magicians are more understandable and hence bear more weight than the work of physicists. The reason is simple: It takes a long time to learn physics. One lecture will not convince a lay audience that conservation of energy forbids ESP. Those who have never studied physics cannot comprehend what the physicist means by energy, and they misunderstand what is meant by conservation. They think it has something to do with the environment, or using energy-efficient cars to save gasoline. For this reason the public is more convinced by an Amazing Randi demonstration than by a physics lecture. About this I am not complaining

or being jealous. Both physicists and magicians are necessary: Physicists must continue to study the fundamental laws of nature and make their knowledge more certain. Psychologists, anthropologists, magicians, and other investigators of curiosa must continue to expose false teachings by psychics, advocates of ESP, astrologers, faith healers, and other perpetrators of delusion. These two groups work at opposite ends of the same endeavor, and both are needed. If you can't understand the physicists, you can understand the magicians. And the most convincing argument remains the uncollected $10,000 offered by James Randi for any act of ESP or other paranormal event verified in his presence. In well over ten years nobody has

produced a convincing claim for the reward.5 7. Finally: Learn to postpone belief if you encounter something you don't understand. If you see something or are told something you can't explain, don't think it is necessary to believe in one theory or another immediately. If somebody says you must believe in either creationism or evolution, and you know nothing about either theory, then wait until you learn the facts about both before you decide. It is not necessary to have an opinion about everything in the world. The trick in reading science fiction and fantasy is to suspend disbelief in order to enjoy the story. But in dealing with the

mundane real world, the safest bet is to suspend belief until you have the facts. Unless, of course, you prefer to think of the world as fantasy. That's all right as long as you are amused.

2. Answering questions

We are now in a position to answer the questions posed in Section 1.1 under the rubric "How do you know. . . ?" We will, as much as possible, explain the logic of our answers, and will pay special attention to the role of the laws of denial. 1. How do you know a fetus is not a human being? This is not, strictly speaking, a scientific

question, but rather a semantic or linguistic one, since the answer hinges on the definition of "human being." Some people would like to define a single cell as a human being, which is like saying that an acorn is an oak tree. Logic of this kind is like Humpty Dumpty thinking: "When I use a word . . . it means just what I choose it to mean, neither more nor less."6 One reason given for defining a single cell as a human being is that this cell contains all the DNA molecules of a human being. But that would apply to each cell of my body, and since I don't think that one cell from my little finger is equal to an entire human being, I look for a more reasonable definition. Let us define a human being

physiologically and psychologically, rather than metaphorically. A human being possesses a nervous system that enables it to think and behave like a member of Homo sapiens. How do we know when a person starts to think? The human nervous system does not begin to develop synapses until the third trimester of fetal development, and it continues to develop synapses after birth. It follows, then, that a fetus is not capable of thinking and feeling like a human being until at least the third trimester, when it begins to build a nervous system capable of processing thoughts. This definition avoids unobservables such as "soul" and "spirit." 2. How do you know a fetus doesn't feel anything?

The answer to this is the same as to the previous question. The fetus doesn't feel anything until it has a nervous system capable of transmitting electrochemical impulses from one neuron to another. 3. How do you know when life begins? Again, this is a rhetorical question, not a scientific question. Both egg and sperm are alive before the fertilization that produces a new person. Therefore "life" does not begin anywhere during the process. It was already there. What the questioner generally means is: "How do you know when a particular human life begins?" The answer is the same as the answer to the two previous questions: it's a matter of definition rather than of natural

law. 4. How do you know smoking causes cancer? At present our knowledge is based on epidemiological studies: Statistics show that smokers die of cancer (and of many other diseases) sooner, on the average, than nonsmokers. Some (mainly representatives of tobacco companies) complain that this kind of evidence is not "causal," but merely shows a correlation, which can be due to causes other than smoking. But statistical studies that relate the death rate to the amount of smoking are very close to being causal. Longitudinal studies that follow smokers for long periods of time are quite persuasive,

especially when they show that those who stop smoking have a greater life expectancy than those who continue to smoke. Extreme skeptics will not be satisfied until molecular biology identifies the particular ingredient in tobacco smoke that causes a particular mutation in a particular gene that causes the cancer. That is in accordance with our insistence on identifying the interaction that conveys change from one atom to another. In time, the direct cancer-causing interaction will no doubt be identified. In the meantime, it must be noted that the statistics relating tobacco smoke to cancer are much better than the statistics purportedly proving the existence of ESP. The life expectancy of

smokers is ten years less than that of nonsmokers. This is a very large effect, implying that the average death rate of smokers is over twice that of nonsmokers. 5. How do you know radiation from a microwave oven or a computer terminal (or a digital watch) will not harm the user? The public has become so alarmed about nuclear radiation that it tends to lump all kinds of radiation together into one menacing whole. The word radiation has become loaded with portent. However, the various kinds of radiation in use differ greatly from one another. In general, the biological effects of radiation are determined by two factors: the wavelength

and the power density. The wavelengths associated with radiation from microwave ovens and computer terminals are much longer than those involved in nuclear radiation, and the quantum energies are correspondingly less. Microwave energy can do little more than wiggle the organic molecules in the body back and forth. If you receive large amounts of microwave energy, you tend to get cooked. However, microwave ovens have metal boxes (the glass doors are covered with perforated metal screening) that prevent all but a very small amount of energy from escaping. That there is little agreement regarding the biological dangers of microwave ovens indicates that their actual effects, if any, are very small.
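To put rough numbers on the statement that longer wavelengths carry less quantum energy, here is a small sketch in Python (offered only as an illustration; the particular wavelengths chosen are representative values, not measurements from the text) that applies the standard relation E = hc/wavelength:

    # Rough comparison of the energy carried by a single quantum
    # of radiation at different wavelengths, using E = h * c / wavelength.
    PLANCK = 6.626e-34      # Planck's constant, in joule-seconds
    LIGHT_SPEED = 2.998e8   # speed of light, in meters per second
    EV = 1.602e-19          # joules in one electron-volt

    examples = {
        "microwave oven (about 12 cm)": 0.12,
        "visible light (500 nm)": 500e-9,
        "medical X-ray (0.05 nm)": 0.05e-9,
    }

    for name, wavelength in examples.items():
        energy_ev = PLANCK * LIGHT_SPEED / wavelength / EV
        print(f"{name}: about {energy_ev:.1e} eV per quantum")

    # A microwave quantum carries roughly a hundred-thousandth of an
    # electron-volt, far too little to break a chemical bond (a few eV);
    # its main effect is to jiggle molecules and heat them.

The spread in these numbers, about nine powers of ten between the microwave and the X-ray, is the quantitative content of the statement that microwave quanta are individually feeble.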

Radiation from computer terminals should consist mainly of low-energy, long-wavelength X-rays. The glass screen is enough to absorb much of this radiation. What gets through may have some biological effect, but this effect is probably less than that produced by merely crouching motionless in front of a terminal for long hours. There is no denying the severe neck-muscle cramp hazard associated with writing on a word processor. I can personally attest to that. As for digital watches, the only radiation emanating from them is that generated by an overheated imagination. The above analysis is based on what we know about the properties of radiation and their interactions with matter. It is based on the

laws of permission and hence always involves some uncertainty. At present, considerable controversy exists about the effect of low-level radiation on humans. Even the low-frequency electromagnetic fields produced by electrical wiring are suspect. While concerns about this are legitimate and research should be done, particularly on the effects of high-power transmission lines, many of the reports I have seen on the biological effects of low-level radiation strike me as having certain characteristics in common with parapsychology research, in particular the search for minuscule effects embedded in large statistical fluctuations, inspired by hopes and fears that have little to do with scientific knowledge. One widely quoted study on the cancer risks of living near

power lines or transformers reports no actual measurements of the electric or magnetic field strengths in the locations of interest and accordingly no attempt to correlate biological effects with measured field strength. A quick calculation predicts that the magnetic field strength under a high-voltage power line should amount to a few gauss. It would be surprising to find that such a low magnetic field at a frequency of 60 Hz (the earth's magnetic field strength is about half a gauss) could be a cause of cancer. 6. How do you know that radioactive wastes can be stored safely for a thousand years? This is something we don't know for sure.

An answer to this question must be based on laws of permission, which are not precise. However, we know at least as much as the ancient Egyptians knew about the characteristics of building materials, and some of their underground storage chambers still contain artifacts in a good state of preservation after three or four thousand years. Surely we can do as well. 7. How do you know that the Three Mile Island nuclear plant won't blow up as soon as it is rebuilt? We cannot prophesy with total certainty that TMI will not blow up as soon as it is rebuilt. Nor can anyone prophesy with assurance that it will blow up. (Assuming that the damaged reactor is rebuilt at all.)

However, no nuclear power plant has ever blown up as soon as it was built. Why should TMI be different? (French nuclear plants have operated for many years without incident.) Again we see the difficulty of making certain prophecies based on the laws of permission. 8. How do you know the earth was created 10,000 years ago? We don't know this at all. Creation of the earth 10,000 years ago (give or take a few thousand years) is a claim made by creationists, who present no physical evidence to support it. 9. How do you know the earth was not created 10,000 years ago?

We know this because we can determine the age of fossils and minerals in the earth's crust by radionuclide dating and by other means. The fraction of particular lead isotopes in a bed of uranium tells us how long ago the uranium was laid down there; the time is in the billions of years. There is no evidence that the rate of radioactive decay has changed during the last few thousand years. A sudden change in radioactive half-lives would have to be accompanied by a corresponding change in the strengths of the fundamental forces within the atomic nuclei. Asserting that such a change took place is not enough; a good physical reason for it must also be given. Creationists point their fingers at physicists and say, "All your theories are based on an assumed uniformity of natural

laws," as though that assumption lacked logic. If creationists want to assume that the laws suddenly changed a few thousand years ago, they must give some reason for that hypothesis. Any effort to invoke supernatural reasons, however, takes us out of the realm of science. 10. How do you know that the laws of nature existing thousands of years ago are the same as those that hold today? We can look at the light coming from stars thousands of light- years away and see that they have the same kind of spectra as the light coming from the sun. This evidence indicates that the processes taking place within stars are the same today as they were thousands and millions of years ago.

The hydrogen spectrum of a million years ago is the same as the hydrogen spectrum of today. A diamond created in the earth's crust millions of years ago has the same structure as a diamond created in a modern laboratory. Periodically, attempts are made to determine whether certain natural constants (e.g., the gravitational constant, or the ratio of electrical to gravitational field strength) have changed gradually since the beginning of the universe. Such efforts have failed. Evidence of this kind convinces us that the laws of nature do not change with time. Claims that natural laws have changed in the recent past in such a way that the changes could not be observed are empty, for (as described in

Sections 7.1 and 7.2) you can explain anything you want with this kind of reasoning. Being unfalsifiable, such claims cannot be considered part of science. 11. How do you know that a creature as marvelously complex as a human being could have been created without a guiding plan from a higher power? All sciences are based on the assumption that everything that happens in the universe is determined by natural laws. Supernatural forces are not part of science and do not appear in biology textbooks. This is not because biologists know that human beings evolved without supernatural guidance, but because the

assumption that nature is orderly is necessary to the existence of science. Many scientists privately believe in higher powers, but rely on natural laws to analyze the behavior of physical objects. Their success over the centuries indicates that the hypothesis of an orderly nature governed by laws is a good one. In science, the explanatory power of a hypothesis is an important criterion of its usefulness and validity. Conversely, lack of explanatory power is a fatal flaw. A theory that assumes a guiding plan from a higher power explains nothing and leaves unanswered a host of questions raised by the hypothesis itself. For, having assumed a higher power, the theory must then explain how this higher power originated,

how it obtained its knowledge, how it was able to guide the creation of human beings out of chaos, and by what physical mechanism this guidance took place. Such a hypothesis merely shifts the questions to a different level of existence, thereby putting the answers totally out of reach. The concept of evolution based on natural law at least offers the hope that answers can be found within our own universe. The marvelous complexity of the human being is not in itself a reason to disbelieve that we evolved by natural means. It merely guarantees that understanding how this evolution took place will be a difficult task. 12. How do you know we won't find a

way to travel faster than light? This is a complex question. To answer it fully requires the use of several parts of the principle of relativity: the constancy of the speed of light, the sameness of the laws of nature in all inertial reference frames, and the principle of causality. The Appendix outlines a strong proof that faster-than-light travel is impossible. Many optimists still cling to the hope that a way around these objections may be found. Perhaps we can leave this four-dimensional universe through a black hole and return through a white hole, thus circumventing the restrictions of space and time. That would be hard to do, however, without reducing yourself to an

inchoate mass of disconnected elementary particles. 13. How do you know antigravity is impossible? Electrical shielding is possible because there are two kinds of electrical charges: positive and negative. By contrast, there is only one kind of mass. Therefore gravity, an interaction between masses, has only one form: a universal attraction. As a result, there is no way of arranging masses so that they do anything but attract each other with the gravitational force. They can't produce a gravitational shield or a repulsive gravitational force. (See Section 6.1.) Here we see a peculiar, but common, situation: a concept invented as

sheer fantasy by a science fiction writer taking on a life of its own. Science fiction readers begin to believe in it; then scientists must spend time deciding whether this concept makes any sense from a scientific point of view. This is not to say that we must offhandedly dismiss these ideas just because they originated in science fiction. After all, the ideas of space travel and of robots started this way. The job of science is to decide which few out of many fantasies may be realized. (More on this in Section 8.3.) 14. How do you know that the laws of nature we believe true today won't be found false in the future? This question is invariably brought up

when you try to prove that faster-than-light travel is impossible. It is a question that must be answered specifically for each individual law. Whenever a law of nature is verified by experiment, we must examine the experiments and make note of the domain in which that particular law has been verified. For example, conservation of energy and momentum has been verified for all four fundamental interactions. Therefore we know that these laws hold true for any events mediated by these interactions, which means all physical events that we can observe. We know that these laws have not changed since the beginning of the universe, a period of over 15 billion years. Since there is nothing special about the present, there is no reason for us to think that the

laws will change in the next hundred or thousand years. Why should they go unchanged for 15 billion years and then suddenly change just because we happen to be here now in this little corner of the universe? In short, in any situation (or domain of validity) where a law of nature has been experimentally verified, there is no reason to believe that it will change in the future. Belief in the changeability of natural law is connected with an unconscious belief in human omnipotence and a human desire to make anything possible. 15. How do you know there is no life after death?

From a scientific point of view, the concept of life after death is an oxymoron. Biologically speaking, after an organism dies its matter is no longer alive, and that is all there is to it. Life after death has meaning only within the context of vitalism, the idea that some kind of spirit ensouls and ultimately survives the human body. Before believing in life after death one must first believe in an immaterial spirit that contains the persona and is able to transmit its characteristics into the future. As far as science is concerned, this spirit does not exist. Therefore a scientist answers the question simply by saying that the concept of life after death has no place within the framework of current scientific knowledge. To show that there is life after death, one would have to demonstrate

physically that there is something within the human body that can exist separate from it and that can continue to exist after the material body decomposes. (The same remarks apply to reincarnation.) 16. How do you know ESP is impossible? Here is a case where the simple application of the laws of denial gives an immediate and clearcut answer. Transmission of information through space requires transfer of energy from one place to another. Telepathy requires transmission of an energy-carrying signal directly from one mind to another. All descriptions of ESP imply violations of conservation of energy in one way or another, as well as violations of all the

principles of information theory and even of the principle of causality. (See Section 6.1.) Strict application of physical principles requires us to say that ESP is impossible. A "softer" statement would be that the probability of ESP being real is very small. 17. How do you know that paranormal phenomena like teleportation, telekinesis, and poltergeists are impossible? These phenomena require violations of conservation of energy and of momentum, unless you are able to demonstrate the existence of a physical interaction capable of setting distant objects into motion. 18. How do you know parapsychology is

a pseudoscience? A pseudoscience is a false science that pretends to be real. (At one time science fiction was called pseudoscience fiction, to the chagrin of science fiction fans.) As we have shown (Section 6.1.), parapsychology deals with phenomena that cannot take place within the framework of the four fundamental interactions, and that violate laws of denial, such as conservation of energy, and the laws of information theory. Insofar as parapsychologists ignore this defect or seek to excuse it by invoking psychic energy and the supernatural, parapsychology must be considered a pseudoscience. For parapsychology to be considered a real science, it would have

to show that (1) the phenomena it studies are real, and (2) it has a scientific explanation for these phenomena, that is, a theory whose general principles relate these observations to underlying mechanisms. What distinguishes a science from a set of disconnected observations is the underlying theory, which knits all the observations into a unified conceptual structure. Many parapsychologists still hold to a dualism of mind and body. However, since mind has never been observed separate from the nervous system, and since there is no theory to describe such a separate mind, the idea of a mind-body duality cannot be considered part of science.

So much for the proof that parapsychology is not a science. Does this also mean that parapsychology is a pseudoscience? Not necessarily. It could possibly be a prescience, groping to become a science. To show that it is a pseudoscience, we would have to show that all experiments done in parapsychology so far have given negative results and that all future experiments will do the same. Since we have not done that, we cannot with 100% confidence say that at the present time we "know" that parapsychology is a pseudoscience. On the other hand, present trends encourage us to anticipate that parapsychology will soon be proved pseudoscientific. Considering the rate of increase in knowledge in molecular

biology and neurophysiology, we can look forward to a not-too-distant future when the human nervous system will be understood in microscopic detail. If and when that happens, it will be apparent that what we call the human mind is an outgrowth of the operation of the nervous system and does not have an independent existence. Mind-body dualism will then be shown to be unnecessary, and parapsychology, to the extent that it depends on that dualism, will appear as a pseudoscience.

3. Fantasy and reality

We now come full circle and return to the question raised in the Introduction: How can one delineate the boundary between

fantasy and reality? The answer to this is confused by multiple meanings of the words "fantasy" and "reality." In its purest physical sense, reality is objective reality: the universe that is studied by physicists, chemists, biologists, and other scientists. It is the space-time continuum and the objects that fill it. It is the elementary particles and the forces by which they interact. It is planets and the creatures that inhabit them. Objective reality is all the things that exist outside my body, things that I can observe through my senses or through other instruments at my command. Objective reality also includes my body and the signals that circulate through its nervous system. Those signals provide the connection

between objective reality and my perception of it. Carrying information between my sensory organs and the brain, electrochemical pulses register in my memory the presence of objects and events outside my body. When I perceive things happening outside my body, my awareness or consciousness of these perceptions is another kind of reality: subjective reality. Only I can know my own subjective reality. That reality also includes perceptions of happenings within my own body. If I have a headache, only I can feel that headache. If you have a headache, you can tell me you have a headache, but it is difficult for me to know whether you really are feeling pain or just telling me a story. Each

person has his own subjective reality, but there is only one objective reality. The concept of subjective reality might be faulted by extreme skeptics as having no more scientific validity than that of psychic energy or of the soul. Behavioral psychologists used to warn their students against "armchair psychology," speculation on such esoteric topics as emotion and feeling. However, a good case can be made for retaining the abstract concept of subjective reality. First, each of us is sure of his or her own subjective reality. We all make similar reports on its existence. I know what my thoughts are, and I, not being a solipsist, believe that you all have your own thoughts as well. Therefore it is not hard for me to believe

that subjective reality is a useful and necessary concept. Second, in some cases I can verify what is going on in your mind using objective means. Dream experiments detect the onset of dreaming through the use of eye-movement detectors. A report of a back pain can be correlated with visible muscle spasms. A reported headache can be verified by an X-ray photo of an embolism or tumor. In all of these cases the report of an internal event can be verified by visible external events. But what if the headache shows nothing on the outside? How can we know what a person is really thinking and feeling? In view of the rate of increase of knowledge in neurophysiology, we can (with the help

of some optimism) look forward to being able to correlate specific states of the nervous system with specific thoughts. That is, if the presence of a headache corresponds with certain types of pulse codes in certain parts of the brain, then being able to detect and translate these codes would give sure evidence of the headache. Detection and translation of nerve signals might lead the way toward a direct-wired version of telepathy. Even if we fail to go so far as to read a person's mind through wires connected to his nervous system, proof that his thoughts arise solely from the activities of his neurons would give the concept of subjective reality a high degree of plausibility.

A subset of subjective reality is fantasy, consisting of thoughts originating entirely in the mind. The word "fantasy," however, means different things to different people. In its narrowest sense, it denotes a genre of literature having to do with heroes, heroines, sorcerers, ghosts, poltergeists, and the supernatural. Even in this narrow sense fantasy is, for some people, indistinguishable from reality: There are many who take poltergeists seriously, believing that what they have read in works of fiction describes what happens in the real world. In a somewhat broader sense, "fantasy" means imaginative fiction dealing with types of events that have never happened in the world of objective reality: travel

to alternate universes, to distant galaxies, to the past or future, wars between interstellar civilizations, topics usually labeled "science fiction." But since such ideas have little or no likelihood of being realized, they must be recognized for what they are: fantasies that use words and concepts of science to add verisimilitude to the story, to promote the necessary suspension of disbelief. Delete this form of fantasy and very little science fiction would be left. Hordes of people would also be unhappy. These are the people who take offense when I tell them that nobody will ever travel to other stars at faster-than-light velocities. Such people invariably counter with the question: "How do you know that future civilizations won't find a way to go faster

than light?" Belief that all our present knowledge is subject to change is the basis of innumerable contemporary fantasies. In its broadest meaning, fantasy is any thought or image created in the mind from within. This is the meaning employed in this book. Fantasy may be self-directed, as are, for example, my preliminary thoughts about the next sentence I am going to write. Fantasy may be about nothing existing in the real universe: conjectures about a planet inhabited by beings consisting of pure, disembodied thought. (This fantasy consists of two partsfirst, the idea that disembodied thoughts exist, and second, that beings consisting of disembodied thoughts exist.) Or, it may

consist of imaginary events surrounding a real object: a sexual fantasy about a person you have just met at a cocktail party. It may be trivial: imagining what you are going to do on your next day off. It may be an important personal beginning: an adolescent fantasizing about making the Baseball Hall of Fame or winning a Nobel Prize. It may change the course of history: Adolf Hitler dreaming of world conquest while sitting in a prison cell. Fantasies must not be dismissed as useless or unimportant. Every creative idea begins as a fantasy: John Dalton imagining the atom; Kekulé dreaming of the benzene ring. Einstein's general theory of relativity began as a fantasy involving curved space and differential equations. Without

fantasy, science would be mere data collection, with no ability to formulate connections and relationships. Every theory is a fantasy until verified. Fantasies are essential even on a practical level. The engineer who puts an equation into his computer to plot the future path of a space vehicle cannot be sure that the vehicle will behave exactly as the equation predicts. To make an exact prediction, he must know the exact mass of the vehicle and the exact thrust of the rocket engine, as well as take into account the effect of the atmosphere, the gravitational effect of the earth's complex shape, the effect of gravitational forces from the sun, moon, and other objects, and even then he has still left several

factors out of the equation. The equation, therefore, is a fantasy; the trajectory plotted on the computer screen is an imaginative approximation to reality. It is what the engineer thinks is going to happen. Reality is what the rocket actually does. The two will agree only within some indeterminate margin of error. Fantasy, then, pervades all of science and all of life. It is the imaginative beginning of all scientific theory, the precursor of all experimentation. A person doing an experiment must first imagine the final result he is trying to prove. Even the raw observations involve considerable fantasy. The physicist imagines the alpha particle shooting through a cloud chamber, even though all he sees is a line of water

droplets. It is only by means of other observations with different instruments that he proves that the track is actually caused by an alpha particle, demonstrating that in this case reality and fantasy coincide. When what you think coincides with what happens, then the fantasy is realized. But how do you know what is happening in objective reality? When you look at an apple, what enters your brain is nothing more than a series of coded pulses coming in through the optic nerve from the retina. What is it that gives you the sensation of "round, red apple"? Early childhood is a continuous training in pattern recognition, the translation of pulses into images, sensations, and shapes. People who are

born blind and then later acquire sight cannot at first make sense of the signals coming into their brains. They must literally learn to see. People who are born into a language that lacks certain sounds cannot recognize those sounds when they first encounter another language. These experiences teach us that what we see when we look at an external object is an internal interpretation or representation of what actually exists. Thus, all of neural development and much of education is training to recognize and interpret reality. A different type of education indoctrinates people to believe that certain fantasies are more real than objective reality. (For example, the fantasy that one religion has a monopoly

on the truth, a belief that obviously cannot be true of all contradictory religions.) Therefore, it is important to be aware of some procedures that help us to distinguish between fantasy and reality: 1. Learning to create accurate internal images from the nerve pulses entering the brain is step one, a process fraught with danger. Neurological defects (e.g., dyslexia and schizophrenia) interfere with pattern-recognition processes. Optical illusions and mirages trick the eyes and distort subjective reality. Does the sun change color at sunset? No, the sun does not change; only the light reaching us from the sun changes. Does knowing we are looking at an illusion always help? Not always. We know that the images on the

television or cinema screen are only those of actors telling a story. This does not prevent the tears that may be shed when a character dies. Knowing that it is "only a story" does not eliminate the emotional response. 2. Since our eyes cannot be trusted, science aids the senses with instrumentation: cameras, spectroscopes, telescopes, recorders of many kinds, etc. Data is analyzed with computers in order to avoid human prejudice. If two or more independent instruments agree in their interpretation of reality, then the probability of their correctness is enhanced. Very rarely is a single observation deemed sufficient to verify a broad and significant theory. The

nonscientist rarely has access to such instrumentation. However, cameras and tape recorders are now common and can be used. Several people agreeing that they see the same thing helps verify an observation. However, forensic psychologists are familiar with the pitfalls of witness testimony concerning strange and unexpected events. 3. A familiar observation is easily accepted as true, while an unfamiliar and unexpected observation is not so easily digested. Translated into scientific terms, an observation that has been made many times and is supported by an existing theory or frame of reference may be accepted without difficulty. On the other hand, a phenomenon seen for the first time

and backed by no foundation in existing theory or known mechanism is usually met initially with disbelief. In fact, caution is the appropriate response to such an observation: not necessarily disbelief, just caution. For example, I am not surprised to hear somebody across the room talking to me, for two reasons: first, the phenomenon is a familiar one, and second, I know how sound waves propagate through the air. Nor am I surprised to hear voices come out of a television set or see pictures on its screen, for I know how electromagnetic waves are converted into these audio and video signals. The familiarity and repeatability of these phenomena make it possible for scientists to understand how

they work. On the other hand, I would be very surprised to see images and hear voices that had no apparent source or means of propagation through space. Of such apparitions I have no previous experience or prior theory, which is why I would find it hard to believe that they were actually taking place. My first response would be to ask myself whether I was hallucinating. I would prefer to think that the sights and sounds were originating in my own nervous system (from errors in software or hardware) rather than to believe they arrived directly from another mind. It is easier for me to imagine strange events taking place in my own mind than to accept direct transmission of thoughts

from a distant and unconnected source. Another person who knew nothing about the mechanics of information propagation might not be as surprised as I would be. Receiving pictures directly in his mind might be no more amazing to him than seeing pictures on a television. The familiarity of television has prepared him for reception of messages from a distant source. Lacking knowledge of the fundamental mechanism, he may not recognize the vast difference between television and telepathy. 4. To distinguish between fantasy and reality, then, requires firm knowledge: we must know the model of the universe scientists have constructed using the tools

of theory and verification. This is the kind of reality-testing we have pursued in this book. An important aid to reality-testing is the ability to distinguish between what is possible and what is impossible. Anything that is impossible in the real world must be fantasy. However, the converse is not true: not everything that is fantasy is impossible. Clearly, if a person thinks that he is in telepathic contact with aliens living in the Andromeda nebula, he believes an impossibility and is thus in the grips of a fantasy. On the other hand, a ten-year-old who dreams of becoming a concert pianist indulges in a fantasy that may well be possible. The concept of possibility is in itself fairly complex, and we will have a few more words to say about it in Section 8.4.

5. Having demonstrated some aids to the separation of fantasy and reality, it is necessary to face another question: Is fantasy so harmful that we must always be on the alert for its presence, always searching for that fine line that separates it from reality? The answer is clearly no. We have shown that fantasy is an indispensable part of creativity, both scientific and artistic. Bruno Bettelheim has pointed out the importance of fairy tales in the development of children.7 Science fiction, which is mostly fantasy, may be thought of as fairy tales for adolescents, and has important functions in teaching problem solving (both theoretical and practical), attitudes towards technology, and identification with heroic role models. Without fantasy,

there would be little in the way of literature, drama, and music, nor would there be a fashion trade or a culinary industry. Architecture would be function without form. Without fantasy there would be no love, only sex. In short, all the things that elevate life above the barest, meanest existence would cease to exist. The other side of the coin is that obsession with fantasy can undermine one's ability to function in the real world and result in a considerable waste of resources. An adolescent may spend all his time reading science fiction, neglecting his studies and failing to develop his potential in real science. If he becomes a science fiction professional (a writer, artist, or editor), then nothing is lost, for a hobby has

developed into an occupation. Otherwise, his total dedication to activity in fantasy represents an obsession, an imbalance between recreation and work. How might we view the activities of parapsychologists, who spend thousands of hours and dollars in a vain search for esoteric knowledge? Would not their time be spent more profitably in expanding their knowledge of reality? Perhaps, but usefulness is clearly not their only concern. Their motives are comparable to the obsessions that drove alchemists, seekers after perpetual motion, angle trisectors, and circle squarers. Each of these fads lasted for hundreds of years, until repeated failures, contrasted with the success of the real sciences,

convinced most reasonable people that the chase was futile. The saving grace of most paranormal beliefs is that they are relatively harmless. If a person buys books on UFOs and astrology, the cost may be entered in the entertainment column of the budget. The worst that can happen is that believers contribute some of their income to con artists who enrich themselves by raising the national level of credulity. Fantasies, however, become physically harmful when the victim falls for faith healers, psychic surgeons, and medical quacks. Then he dies, and the rest of society has to pick up the pieces at considerable expense.

In the long run, the greatest harm to society will be created by the false information and antiscientific attitudes spread by merchants of pseudoscience. We are already beginning to see the harm to education caused by the creationists and other fundamentalist know-nothings. Every elementary textbook writer is now under pressure to choose words carefully in order to avoid offending the school boards responsible for the purchase of books. If fundamentalists had their way, biology would be taught as a "mere theory" rather than as knowledge obtained by scientific method. Ironically, those hurt most by fundamentalist restrictions on education are the children of fundamentalists.

Denied a broad education in arts and sciences, indoctrinated with fantasy rather than educated in critical as well as creative thinking, these children find their choice of careers restricted. It can be predicted that relatively few good biologists, paleontologists, and geologists will graduate from fundamentalist colleges, for how many of their students will be motivated to perform serious research based on a feeling for reality? (For many years few physicists graduated from Catholic universities. This situation has been corrected over the past several decades.) An analogous situation existed in Soviet Russia during the 1930s and '40s, when the pseudoscientific theories of Trofim

Lysenko were supported by the dictator Josef Stalin. Lysenko held that acquired characteristics could be inherited, and because this was state doctrine, it was fatal to hold a contrary opinion. Consequently, useful research in genetics was crippled, and Russian agriculture suffered devastating setbacks because of ill-considered experiments. This condition continued until the death of Stalin in 1953, at which time Lysenko fell out of favor. Years were required to bring Soviet work in genetics back to a respectable level. The example of Lysenkoism is the most important argument against the idea that governments and private pressure groups should dictate what kind of science will be taught in schools.

The best defense against the unreal is reality. The curve of knowledge is surging upward at a dizzying pace. Physicists are learning more about the nature of matter and energy, biologists more about the nature of life. The more we know about reality, the less room there is for pseudoscience. Unfortunately, real knowledge is resisted by a public that reads more about pseudoscience and the occult than about real science. The thrill of the esoteric is more seductive than the hard labor of knowledge.

4. Impossibilities

The word "impossible" is one of those terms used loosely and with shifting meanings. When people boast on a

television commercial, "We make the impossible possible," it sounds as though they are perpetrating an absurdity. If you are now able to do something, then it must always have been possible, even though you thought it impossible. The statement may be simply a metaphor; we can make sense of it if we choose the correct meaning of the word "impossible" out of those given below. 1. The word "impossible" is often used to mean, "We don't know how to do it." If I were to say, "At the present time it is impossible to build a computer that thinks like a human being," I would merely be confessing that I don't know how to do it. Since nobody can point to any fundamental law of nature that forbids the building of

such a computer, we cannot say that the task is impossible for all time. Reasons for total impossibility may be found in the future, but right now the word "impossible" is merely an expression of ignorance. Impossibilities of this nature are often temporary. Once you learn how to perform the task, the impossibility disappears. It is important to use the word "impossible" precisely in order to reduce confusion. If I say, "It is impossible for me to create a living cell from inorganic raw materials," I am telling the truth. On the other hand, if I say, "It is impossible to create a living cell from inorganic raw materials," I may not be telling the truth,

because there is no law of nature that says creating a living cell is impossible. One statement deals with my capabilities here and now, while the other refers to mankind in general. 2. Other impossibilities arise from a lack of resources or means. When you say: "It is impossible for me to go to Europe this year," you are simply complaining that this year you don't have the money. Perhaps next year you will. This is another temporary impossibility. On the other hand, if it should turn out that building a spaceship to visit the distant stars would cost more money than the gross national product, this task would very likely be a permanent impossibility.

(Curiously, nobody in science fiction ever worries about the monetary cost of doing something. If they did, there would be little science fiction left.) 3. Some impossibilities arise because the thing you want to do is too complicated or because it would take too long to do it. Many computations that we know how to do on a computer simply cannot be done because we don't have computers big enough or fast enough. Or, it may turn out that building a computer that thinks like a human would require so many transistors that it would be impossible to squeeze them in a space small enough to make it usable. The complexity of the design might make it impossible for any human or group of humans to keep all the details in mind.

Nature has had billions of years to accomplish the job. For how many years can humans concentrate on one task? 4. True impossibility in the fundamental sense arises when a proposed action violates the basic laws of physics, particularly the laws of denial. These are the laws that were discussed extensively in Chapters 5 and 6. The conservation laws and the laws of relativity have been so thoroughly confirmed through experiments that they provide the strongest basis for skepticism toward any proposal that negates them. In spite of our ability to make a number of very definite statements about what is impossible, many people in our modern

world are peculiarly reluctant to admit that real impossibilities exist. The reason is their tendency to confuse the several meanings of the word "impossible" discussed above. Impossibilities that result from practical limitations are not always easy to prove. The perennial optimists then jump to the conclusion that nobody can ever prove that something is totally impossible. With this logic they construct a paradox: "It is impossible to prove that anything is impossible." Never suggest that there is not enough money or natural resources to perform the desired task. Such admission is unfashionably pessimistic and smacks of doomsaying. We read editorials claiming optimistically that "every citizen should

be entitled to the best possible medical care." This is a nice sentiment, but is clearly an impossibility if the word "possible" is to be taken literally, for there are not enough "best" doctors to go around. Even if we defined a "best" doctor to be the one at the top of his medical school class, there would still not be enough to care for everybody. And if the writer implies that all doctors are equally good, so that anybody getting to a doctor is getting the "best" care, he is just as wrong. To make any sense out of the statement, we must substitute the word "feasible" for "possible". Similarly, economic theories that link prosperity to an infinitely continuing rise in the world's production rate must

eventually succumb to the reality that the earth has only a finite surface area and that nothing on it can increase indefinitely. We are rapidly becoming aware that materials and energy sources are not the limiting factors: we have enough resources to last several centuries at the present rate. But suddenly, here and now, we are discovering that the planet is not an infinite sink for the burial of wastes, and that the atmosphere is a delicate feedback system that can wreak havoc if loaded with too much carbon dioxide or hydrocarbons. Cities battle over waste-disposal sites, and sales of sun lotion soar as the ozone layer is depleted. In this book we are concerned primarily with impossibilities that violate the

fundamental laws of nature. As we have seen, the laws of denial enable us to pass quick judgment on complex proposals by answering one simple question. Any feasibility study for a new device should begin by asking: "Does this machine violate conservation of energy? Does it create energy from nothing?" If, upon investigation, we find that the answer is yes, then we need go no further. We know the machine will not work. If the answer is no, then more study is required, for so far all we know is that the machine might possibly work. We are not sure that it will. This is why we feel safe in saying that perpetual motion machines are an impossibility. The only problem here lies

in the failure of certain people to understand that a perpetual motion machine is any kind of device that violates conservation of energy. Avoid silly people who give arguments like, "The motion of the earth going around the sun is perpetual, so therefore a perpetual motion machine is possible." The same logic can be used to show that it is a waste of time to pore through the tedious archeology of Worlds in Collision and make detailed criticisms concerning the historical data. One need only focus on the statement that the passing of the planet Venus close to the earth caused the earth to stop rotating on its axis and subsequently to start rotating again. "Stones fell from the heavens, sun and

moon stopped in their paths, and a comet must also have been seen."8 Venus simply cannot have stopped the rotation of the earth by means of its gravitational field. The gravitational force between two planets passes through the centers of the two spheres: it is a central force. The law of conservation of angular momentum assures us that central forces between two objects cannot change the state of rotation of either object. We need to go no further, except possibly to assure ourselves, by putting numbers into the appropriate equations, that tidal forces and magnetic forces are much too small to cause any such rapid change in rotation. (Tidal forces between earth and moon have been slowing the earth's rotation for millions of years: the effect is too small to notice

from one year to the next.) Furthermore, none of these forces could have restored the rotation of the earth once it had been stopped. It was this kind of knowledge, the knowledge that the events described in Worlds in Collision could not possibly have happened, no matter how many biblical references were quoted, that turned the world of science unanimously against Velikovsky. Two kinds of errors can be made when passing judgment on impossibilities: (1) saying that something is possible when it is actually impossible, and (2) saying that something is impossible when it is actually possible. Because the laws of

denial are so specific and precise, a detailed understanding of them reduces one's chances of falling into these errors. Throughout history some unwary people have claimed things to be impossible that later turned out to be very possible. Their fundamental mistake was in relying on the laws of permission, which, when coupled with a pessimistic turn of mind, can easily lead to false conclusions. The most pitiful members of this group are those who arrogantly predicted that man would never fly. One unforeseen result of this debacle was the spawning of the kind of optimist who now loves to pounce upon me whenever I say that something is impossible, intoning the obligatory incantation: "Look at all the people who

proved that flying is impossible. They were wrong, and so are you." In the first place, those who "proved" that flying was impossible did not really prove it. They may have thought they proved it, but that's another story. Their "proofs" used the laws of aerodynamics in a day when not enough was known to allow any kind of valid computation. They either underestimated the amount of power that could be obtained from an internal combustion engine, or the amount of thrust that could be obtained from a propeller, or the amount of lift that could be obtained from a properly curved wing. In any case, their calculations were not good enough to prove anything, and so they foundered on their own pessimism. Here was a classic

case of misguided skepticism, an excellent example of the rule that unrestrained pessimism linked with congenital skepticism can lead to error. The most blatant example of erroneous skepticism was that perpetrated by an editorial writer who, in discussing Robert Goddard's new proposal to use rocket propulsion for space travel, proclaimed the following in the middle of the New York Times' editorial page: As a method of sending a missile to the higher, and even to the highest parts of the earth's atmospheric envelope, Professor Goddard's rocket is a practicable and therefore promising device. . . . It is when one considers the multiple-charge rocket

as a traveler to the moon that one begins to doubt . . . for after the rocket quits our air and really starts on its longer journey its flight would be neither accelerated nor maintained by the explosion of the charges it then might have left. That Professor Goddard, with his "chair" in Clark College and the countenancing of the Smithsonian Institution, does not know the relation of action to reaction, and of the need to have something better than a vacuum against which to react, to say that would be absurd. Of course he only seems to lack the knowledge ladled out daily in high schools.9 The editorial writer should have been more wary of the knowledge ladled out in his own high school, for he clearly

understood little of Newton's laws of motion, as well as of conservation of momentum and the mechanics of rockets. It never occurred to him that the rocket reacted against its own exhaust gases, and that the atmosphere had nothing to do with it. The example demonstrates that to make good predictions two things are needed: a firm knowledge of the fundamental laws of nature and knowledge of the way they are applied. The hapless journalist knew something about Newton's third law of motion (which in this case is equivalent to conservation of momentum), but he failed to identify the parts of the system that reacted against each other. As a reaction against such aggressive pessimism, a few years ago the world of

physics shifted into a period of unmitigated optimism. One began to hear profundities such as: "Anything in physics is possible as long as it is not specifically forbidden." This was supposed to encourage people to examine all the theories they could think of to explain new and strange phenomena that were being discovered, particularly in the world of particle physics. Psychologically the statement has had beneficial effects. It has encouraged researchers to open up their imaginations. However, examined dispassionately, the statement is merely a tautology. Of course anything that is not forbidden is possible. The trick is to know what is forbidden. In this book we have tried to focus attention on that aspect of reality.

Notes

1. M. A. Ijaz, "The Age of the Universe Is 16.427 Billion Years," Bulletin of the American Physical Society, 32, 4 (April 1987), p. 1124.
2. A. De Morgan, A Budget of Paradoxes (Chicago: Open Court, 1915).
3. S. Ostrander and L. Schroeder, Psychic Discoveries Behind the Iron Curtain (Englewood Cliffs, N.J.: Prentice-Hall, 1970), Chap. 16.
4. J. Randi, Flim-Flam (Buffalo, N.Y.: Prometheus Books, 1982), Chaps. 1 and 13.
5. Ibid.

6. Lewis Carroll, Through the Looking Glass, Chap. 6.
7. B. Bettelheim, The Uses of Enchantment (New York: Alfred A. Knopf, 1976).
8. I. Velikovsky, Worlds in Collision (New York: Pocket Books, 1977), p. 152.
9. New York Times, editorial page, January 13, 1920.

9. EPILOGUE

A recurrent theme in this book has been the exponential growth in scientific knowledge during the past three centuries. In the midst of this growth certain features stand out. 1. Great revolutions (paradigm shifts) have taken place in our way of thinking about nature. Among these revolutions, the following are of the greatest importance: a. The change from the earth-centered to the sun-centered universe and then to a universe without center. This change was made possible by instrumentation and measurement procedures that allowed for the accurate location of planets, stars, and

galaxies in space. b. The change from no useful model of matter to the atomic and then to the present model based on fundamental particles and forces. This change was brought about by 19th-century discoveries in the theory of electromagnetism, as well as by developments in the technology of vacuum pumps, high-voltage generators, particle accelerators, and the broad array of particle detectors and computerized data analyses now used routinely in particle physics laboratories. c. The change from a view of particles as hard point-like things to the quantum model, in which particles have spread-out, wave-like properties. This change

was made possible by an interplay of mathematical techniques and numerous experiments demonstrating the quantum nature of matter (the two-slit diffraction experiment, measurements of atomic spectra, etc). d. The change from a Newtonian to an Einsteinian universe, in which there are no preferred frames of reference and the speed of light is the same to all observers. This change was made possible by optical and electronic techniques that enormously increased the precision of speed-of-light measurements. e. The change from a simplistic view of the world as a clockwork operating under rules in which one cause produces one

effect, which in turn causes another effect in a straight chain, to a model based on system and information theories. In this more sophisticated model, a system can have many inputs, all varying independently. The output signals may be fed back to the inputs of the system, causing behavior that cannot be explained by simpler theories. Negative feedback can produce self-regulating systems; positive feedback can produce oscillations or chaotic behavior. Information theory is the basis for the design of computers and automatic control systems, as well as for theories of intelligence. f. The change from vitalistic theories of life to those in which the fundamental

processes are biochemical and neurophysiological. This change resulted from laboratory techniques that made it possible to determine the biochemistry of living cells and the structure of the molecules making up living matter, to trace the transmission of nerve impulses within living organisms, and to determine in greater and greater detail how specific areas of the animal brain are associated with particular forms of behavior. g. The change from a theory of supernatural creation to the principle of natural evolution, which was helped along by modern methods of dating geological and paleontological specimens. The principle of evolution replaced vitalistic theories with the concept that life began

and evolved to its present forms as the result of natural interactions among the fundamental constituents of matter, without need of any intervention by supernatural forces. All of the above paradigm changes resulted from the interplay of abstract concepts and theories with improvements in instrumentation and laboratory techniques. Each new and improved instrument allowed an extension of human perception to larger or smaller domains: we can now photograph individual atoms and measure the distance of galaxies billions of light-years away. Refined instrumentation provided greater precision in measurements of the speed of light, the electric charge on elementary

particles, and the age of fossil specimens. Computerized data analysis made it possible to perform experiments that would have been unthinkable a generation ago, both in the identification of new elementary particles and in the analysis of large biological molecules. 2. During the past century certain principles, theories, and paradigms have remained invariant in the midst of change. The very fact that scientific knowledge increases exponentially implies that a certain proportion of what we know is permanent. Otherwise, it would not be possible for knowledge to accrete; we could not build one fact on another, one principle on another, one theory on another. If the foundations were unstable,

the structure would not hold. What has made the accretion of permanent knowledge possible is the establishment of philosophy of science as an interdisciplinary study, using the talents of both philosophers and scientists. Understanding of the principles of observation, evidence, and verification, along with the enormous advance of instrumentation, has made it clear what part of our knowledge is good and what part is less secure; furthermore, for every shaky theory, scores of skeptics are ready to criticize and bring forth new theories. The laws of denial, representing fundamental symmetries of time and space, stand out as scientific principles

that, once experimentally verified, are not going to change. These laws express the underlying structure of the universe; any change in them would require a change in the form of the universe at the deepest levels. Therefore, conservation of energy, momentum, angular momentum, and electric charge are with us permanently. Our knowledge of the structure of matter, based on observation, will not change. These comments should not be taken to mean that everything we think we know is cast in stone. Our knowledge may be extended to levels deeper than are presently accessible; particles more fundamental than the quarks may be discovered, and additional interactions, weak or subnuclear, may be found. Some

details of current particle theories are known to be lacking. The present interpretation of quantum theory may be replaced by another that explains more of the observed facts. However, any new theory that arises must include the current one as a subset, for the current one has been experimentally verified more thoroughly than any other theory in history. No change in particle or quantum theory can lessen what we know from precise experiments about atomic and molecular structure. There may, however, be changes in how we interpret that experimental data. 3. By contrast, many disciplines outside the physical sciences have shown no growth in the quantity of verified

knowledge. There has been no accretion of facts or principles that all can agree upon. Though they continue to enjoy great popular interest, they lie outside the boundaries of science, because their subject matter is impervious to the methods of scientific research. They deal with phenomena that we label paranormal. Our quest for knowledge concerning the structure and behavior of matter has turned up no evidence of supernatural forces. No psychic energy, no elan vital has made its presence known to any investigators in biophysics or neurophysiology. The more we know about matter and energy, the less room there is for the supernatural. If we imagine the development of science to be a war of attrition between pragmatic

concepts of nature and fantasies of the supernatural, we can look back to 1828 as the date of a most important skirmish in that struggle. Prior to that time, most chemists had believed that organic and inorganic compounds were fundamentally different and that this difference was the elan vital possessed by organic compounds. Jons Jakob Berzelius, the leading chemist of the time, believed that organic compounds could be synthesized in the laboratory only with the assistance of living tissue. He felt that different laws might hold for inorganic and organic compounds, suggesting, for example, that the law of definite proportions might not hold for the latter.1 In 1828, Friedrich Wohler refuted all

such notions when he found it possible to make crystals of urea by the simple process of applying heat to ammonium cyanate. Ammonium cyanate is an inorganic compound capable of being synthesized from elements that do not derive from living beings. Urea, an organic compound, had previously been considered obtainable only from urine excreted by live animals. Its creation from an inorganic compound broke the sacred line between the organic and the inorganic. Vitalism was found to be unnecessary for the science of chemistry. The vitalists merely retreated one step, however, and continued to insist that elan vital was necessary for life. The controversy persists to the present day.

Some modern biologists criticize reductionist thinking because they think that complex, living molecules obey laws different from those governing simpler, nonliving molecules. While it is true that living organisms behave in ways most simply described by high-order abstractions such as metabolism, information, volition, perception, and consciousness, the crucial question is whether these abstractions depend for their explanation upon fields, forces, or energies not available to simpler, nonliving matter. 4. Thus we find ourselves in the midst of a paradigm shift that is still incomplete. The change from vitalistic to biochemical and neurophysiological theories of life

remains to be fully accepted by the public and by government. The current struggle to define when life legally begins and when it ends is evidence of this continuing conflict. If concepts such as spirit and soul were not part of our everyday vocabulary, there would be less confusion over the definition of a living human being. Since the nervous system produces the thoughts that define the conscious self and control the actions of the body, the presence of an operational nervous system should be the criterion for the existence of a human person within the physical body. The medical profession recognizes this principle when it accepts brain-death (the absence of an electroencephalogram signal) as sufficient cause for removal of life supports. General agreement on this

point would lead to fewer arguments about when it is morally justified to allow the heart to stop beating. Some of the current arguments in the press concerning rights to life and death sound as though we still think of the persona as residing in the heart rather than in the brain. Considering the present status of biochemical knowledge, we can predict the coming of a new and crucial test for the theory of vitalism, a test that will determine whether elan vital is a necessary ingredient for living matter. This trial can be foreseen as the natural outcome of current research in molecular biology, now moving ahead so rapidly that issues considered unimaginable a few years ago are making routine headlines in

the scientific journals. As a portent of things to come, the U.S. Patent and Trademark Office recently decided to allow patents for new animal life forms created by gene-splicing techniques. Pro-lifers and other groups are already protesting this decision, appealing to the dignity and sanctity of life. However, several years previously, a ruling by the Court of Customs and Patents Appeals had stated that bacteria could be patented just like chemicals, and that the circumstance that one was alive and the other not alive was "a distinction without legal significance," a ruling upheld by the United States Supreme Court in 1980.2 As another example, Japan is beginning

the construction of an automated DNA sequencing center that will allow rapid determination of the sequence of nucleotide bases (thymine, adenine, cytosine, and guanine) making up the structure of DNA molecules. It is estimated that this plant would be able to run through a million bases per day, at a cost of 10 to 17 cents per base. Since DNA molecules are the backbones of the genes that determine plant and animal structure, the DNA sequencer would point the way toward establishing the structure of all the genes in a human cell. This task is the basis of a proposal currently under consideration by the U.S. Department of Energy and the National Institutes of Health.3 The proposal aims to initiate a billion-dollar, multiyear effort to map and

sequence the human genome (the complete set of chromosomes in a human cell). Before the mapping could even begin, however, a methodology would have to be developed. It would be a gigantic task, the Manhattan Project of biology. Considering the onrush of events, it is not foolhardy to predict the imminent synthesis of living matter in the laboratory. Just as Wohler saw crystals of urea forming in his flask, some biologist, within the next few decades, will see small specks of living organism in his reaction chamber. It may be no more than a single small cell, but it will be able to reproduce, to metabolize food for energy, and will pass all the tests for a living being. Most importantly, it will have been

put together from elements that had never been part of another living organism. Once that has been accomplished, very little room will be left for vitalism. A number of predictions can be made about the events that will follow. 1. The vitalists will continue to retreat. They will say, "Well, perhaps a virus or a bacterium does not need elan vital. But certainly human beings do. Humans are different because a human being possesses a soul." This kind of argument is already being practiced in the area of artificial intelligence. Every time a computer does something that looks like thinking and

which formerly only humans were able to do, the vitalists say, "Oh, that computer is not really thinking. It's just stepping through an algorithm." When a computerized chess player beats all but the very top chess champions, they say, "Well, the computer is just scanning the moves mechanically. It's not using heuristics, and so what it's doing is not thinking." The word "thinking" keeps being refined and redefined as computers invade its territory more and more deeply. Similarly, artificial intelligence programs are criticized for not being intelligent enough. They do not have the integrative powers of truly expert human beings. In the same manner, synthesis of living matter will force vitalists to revise their

concept of life. We may expect arguments to the effect that life formed in a test tube is not really life, even though it may look like life and pass all the usual tests, because it lacks the divine spirit instilled by means of a supernatural origin. Others may try to save vitalism by insisting that at the moment living matter is created in a test tube, it receives an injection of vital spirit from on high, like Dr. Frankenstein's creation being zapped by the lightning bolt. 2. As the pressure on vitalists and religionists mounts, so will their legal and political backlash. The current court battles surrounding the teaching of evolution are small precursors of the fear and rage that will explode when living

matter is created in the laboratory. We can foresee legislative attempts to bar creation of life, just as today efforts are being made to prevent the use of recombinant DNA research and to bar surrogate motherhood. Violent demonstrations will surround and invade research laboratories. Present actions by antivivisectionist and anti-abortion groups may be viewed as dress rehearsals for future clashes. Biologists may be ejected from their churches. Funds will be pulled from universities engaged in forbidden research. Cries of: "There are things man was not meant to know" will resound through the land. Will there be future Giordano Brunos and Galileo Galileis? Will there be lynchings

and burnings? The answer depends on the degree of hysteria generated. It depends on the proportion of those addicted to fantasy to those willing to accept reality for what it is. It also depends on the effort scientists are willing to put forth to educate the public and to prepare them for future developments in the life sciences. Most of all, it depends on the willingness of the churches to be leaders in acceptance of a rational philosophy, just as they now accept the teachings of Galileo. It can be predicted that the more fundamentalist the church, the more intransigent its opposition to the new discoveries will be. Another current scientific trend that will challenge deeply held beliefs is the

burgeoning of artificial intelligence. Development of a computer that can truly think would demonstrate that dualistic mind-body theories are not needed to explain thought; in addition, theories of telepathy, of ESP, and of other psi phenomena would be fatally weakened. The topic of artificial intelligence has been part of computer science long enough for everybody to realize that the goal of true intelligence is far away. We can write programs that exhibit dumb, routine intelligence, but we cannot yet write programs that exhibit true expertise, creativity, and genius. We can write programs that compose music the way a mediocre composer does

by imitation of style. Our programs can imitate Mozart, but they cannot be Mozart, for that takes genius, the ability to create and surprise. On the other hand, if we knew what genius was, perhaps we could write a program for it. Computer science is such a new field, and the amount of knowledge generated by it has exploded so rapidly, that it is impossible to predict how far we will be able to go in creating computers that think. In this area we really do not know what is impossible, except in dealing with purely physical questions such as how many transistors can be put into a certain amount of space. If we are ever able to make a computer that really thinks, we will be able to dispense with psychic energy or other

vitalistic entities as explanations for human mental processes. One test for whether a machine can really think is the so-called Turing test, which puts the machine on one side of a wall and a human on the other side. The human asks the machine questions, and the machine answers. If after some length of time the human cannot decide whether the thing on the other side of the wall is human or not, then the machine passes the test. Vitalists will complain that since computers do not have self-awareness, humans have something that computers do not. The direction of retreat, then, will be to insist that self-awareness is the human feature that requires psychic energy or spirit. At this point we reach the

ultimate problem of science: to understand the mechanism of self-awareness and to devise a Turing-like test to determine when a computer has it. A final prediction, made warily, is this: If and when scientists are able to synthesize living organisms, then as these artificial organisms become more and more complex, the vitalists will continue to be pushed into a corner. Furthermore, as we continue to learn more about the nature of the human brain, concepts of soul, vital spirit, and psychic energy will become more and more superfluous. However, will they then fade away until they become like the smile on the face of the Cheshire cat? Realism requires us to predict that the general public will be indifferent to this

knowledge and will continue to believe as it always has. The gulf between science and the public will widen. The great sticking point, to most, will be the question of meaning. A universe without meaning seems pointless. If humans are nothing but collections of atoms operating without consciousness, then what is the purpose of it all? A supernatural creator appears necessary to provide meaning for the entire enterprise. On the other hand, if it can be proved that human consciousness arises strictly from the actions of a nervous system operating under the laws of nature, then the question

of meaning collapses to a very simple proposition: we, as humans, provide our own meaning. Meaning exists only in our own minds. The meaning of the universe is whatever we want it to be. The meaning of our own lives is whatever we make of it. In this way we take total responsibility for our actions. We cannot put the blame on higher powers for anything we do. "The devil made me do it" is no longer an excuse. For those who derive comfort from contemplating the sweet pleasures of mysticism, the thought of a universe without supernatural forces may seem dreary. However, for those who are able to take part in the discovery of nature, the

greatest thrill lies in being able to understand what really exists, in exploring the structure of matter and energy, in understanding the nature of living things, and in solving the mysteries of human thought and feeling. As appealing as the mysteries of the occult and the supernatural may be, solving the mysteries of the real universe is ultimately more rewarding.

Notes

1. I. Asimov, Asimov's Biographical Encyclopedia of Science and Technology (New York: Doubleday, 1972), p. 304.
2. New York Times, April 21, 1987, editorial page.

3. "Agencies Vie over Human Genome Project," Science, 237 (July 31, 1987), p. 486.

Appendix
WHY THINGS CAN'T GO FASTER THAN LIGHT

1. The unique and constant speed of light

Many science fiction stories begin by assuming the possibility of faster-than-light (FTL) travel, accomplished by one means or another. After that, no thought is given to the consequences of the assumption. However, this gets the authors into trouble with Einstein's principle of relativity, which, for a number of reasons, forbids spaceships to travel faster than the speed of light. (On the other hand, without FTL travel a large fraction of science fiction would be wiped out; this is the

fundamental tradeoff.) A few authors (notably Ursula LeGuin in The Left Hand of Darkness) try to apply the principle of relativity properly to the motion of their spaceships. As a result, their ships travel more slowly than light. The relativistic time dilation is then used to let the ship travel over interstellar distances in a reasonable time as measured on board the ship. (The elapsed time measured on the fast-moving ship is less than the time on earth or other planets.) But simultaneously (and this is where the logic slips), some of these authors assume that somehow signals can be transmitted from the moving ship at a speed faster than that of light, providing rapid communication over stellar

distances. The ideal is instantaneous communication from any point to any other point in space, preferably using telepathy, ESP, psi, or other psychic phenomena. In this Appendix, I will show that traveling faster than light or sending signals faster than light can generate situations so paradoxical that we are forced to conclude that transmission of any matter, energy, or information faster than the speed of light is impossible. Before we can understand the arguments leading to this conclusion, however, we must first see why the speed of light is so special.
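As a rough illustration of the time-dilation bookkeeping mentioned two paragraphs back (the numbers here are hypothetical, chosen only to make the arithmetic concrete): a ship cruising at v = 0.99c toward a star 100 light-years away takes about 101 years of Earth time to arrive, yet the clocks on board record only

$$ t_{\text{ship}} = t_{\text{Earth}}\sqrt{1 - v^2/c^2} \approx 101 \times 0.14 \approx 14 \ \text{years}. $$

This is the bargain that lets a slower-than-light ship cross interstellar distances in a reasonable time as measured on board, without any faster-than-light travel.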

Einstein's principle of relativity is based on the observation that the speed of light is the same to all observers. That is, anybody who measures the speed of light will get exactly the same number, no matter how fast he is moving or how fast the source of the light is moving. For this reason we call the speed of light an invariant quantity. Its value (as of June, 1982) is 299,792,458 meters per second. The invariance of the speed of light has some remarkable consequences. For example, suppose that science-fiction character Richard Seaton, on board Skylark III, is traveling toward the star Capella at one-half the speed of light, while John Star, on Spaceship Orion, is traveling away from Capella, also at one-

half the speed of light. An observer on each ship measures the speed of the light received from Capella and finds it to be 299,792,458 meters per second, even though one ship is traveling toward the source of the light and the other ship is traveling away. This speed is the same that would be measured if the ships were both at rest relative to the star emitting the light. Even more surprising, perhaps, is that John Star finds that a light beam sent to him from Skylark III also travels with the normal speed of light (which we denote by the symbol c), even though the two ships are moving toward each other. At the same time, the captain of the Skylark, measuring the speed of the light beam he

himself is transmitting, finds that it travels with the same velocity. Everybody gets the same number for the speed of light. To someone familiar only with classical (pre-Einsteinian) concepts, this is an unsettling phenomenon. Sound waves do not behave like this at all. The speed of a sound wave depends not only on the velocity of the source relative to the air, but also on the velocity of the observer doing the measuring. An airplane pilot measuring the speed of sound transmitted to him from another airplane will measure different speeds depending on where the other plane is located, be it in front or to the side. But the speed of light depends neither on the velocity of the source nor on that of the observer. Any observer of a

light beam will measure the same speed, regardless of the source. (The implication here is that we are dealing only with light traveling through a vacuum.) The statement that the speed of light is a constant is not just a theory. It is based on a large number of experimental observations, which in recent years have become extremely precise. It must be understood that simple measurements of the speed of light are not enough to prove that this speed is a constant. This is because measurements of light speed are usually made by finding the time it takes for the light to travel back and forth over a closed path. Averaging over the two directions of travel washes out most of the change that would occur if the speed of

light depended on the motion of the source or measuring apparatus. Therefore, measurements having to do with the constancy of the speed of light are called "second-order" experiments, and usually involve comparison of the speed of two light beams traveling in different directions relative to the motion of the earth in its orbit. The classic experiment of this kind, the Michelson-Morley experiment, was performed about 1887, a century ago. Since many descriptions of these experiments exist in the literature,1,2 we will not go into their details at this point. It should be noted, however, that a Michelson-Morley type of experiment by

itself is not sufficient to prove the constancy of the speed of light. In 1949, the well-known relativist H. P. Robertson of the California Institute of Technology proved that three independent types of experiments are needed to nail down the proposition that the speed of light is absolutely constant. These experiments test the following three hypotheses: 1. The total time required for light to traverse a given distance and then to return to its origin is independent of the direction of travel of the light beam. 2. The total time required for a light beam to traverse a closed path is independent of the motion of the source and of the observer.

3. The frequency of a moving light source (or radio transmitter) is altered by a time dilation factor that depends on the velocity of the source relative to the receiver. All three of these hypotheses have been verified by hundreds of experiments during the past century. The Michelson-Morley experiment tested only the first hypothesis. The Kennedy-Thorndike experiment (1929) was the first to test the second, and the Ives-Stillwell experiment (1938) was the first to test the third. One traditional way of rating the accuracy of such experiments has been to refer to the old (and discredited) idea that light consists of vibrations in a hypothetical fluid, the ether, which is supposed to fill

all of space. The experiments we have been discussing were intended to detect the motion of the earth through the ether, but every one has failed to find such a motion. We describe the precision of these experiments by specifying how much motion through the ether (ether drift) they could detect if such a motion existed. The original Michelson-Morley experiment was precise enough to detect an "ether drift" of about 1 kilometer per second, while the actual velocity of the earth about the sun is 30 km/sec. Since no drift was detected, the experiment was considered to have had a negative result. Modern techniques using laser beams are many times more precise than those used in the older experiments. A 1964

experiment by Jaseja, Javan, and Townes was capable of detecting an ether drift of less than 1/1000 of the earth's velocity, and none was found. Fifteen years later, Brillet and Hall improved on that accuracy by a factor of 4000.3 That is, their apparatus was capable of detecting a motion of the earth relative to the ether four million times slower than the earth's actual velocity around the sun. None was found. These are second-order experiments, which detect variations in the square of the speed of light. The Brillet-Hall experiment thus demonstrates that the speed of light is constant to within two parts out of 10^14, making it one of the most precise experiments in the annals of

physics. I have discussed the precision of these experiments in such detail in order to show that when we say that the speed of light is a constant, we are not just talking theory. We are basing our statement on a set of measurements that dovetail into a closely knit logical system of the most extraordinary completeness. Furthermore, the speed of light is unique in that nothing traveling at any other speed can have the same property of invariance. No other velocity can be the same to all observers. This uniqueness follows mathematically from the Lorentz transformation equations.
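Since the uniqueness of c is said to follow from the Lorentz transformation equations, a short computational sketch may make the point concrete. One consequence of those equations is the relativistic velocity-addition rule, w = (u + v)/(1 + uv/c^2). The snippet below is purely illustrative (it is not taken from the book); it shows that combining the speed of light with any sub-light velocity returns exactly c, while every other speed changes with the observer's motion.

```python
# Relativistic velocity addition, a consequence of the Lorentz transformations:
# a speed u measured in a frame that itself moves at speed v relative to us
# combines to w = (u + v) / (1 + u*v/c**2).
C = 299_792_458.0  # speed of light in m/s, the invariant value quoted in the text

def add_velocities(u: float, v: float) -> float:
    """Combine a velocity u, measured in a moving frame, with that frame's velocity v."""
    return (u + v) / (1.0 + u * v / C**2)

# Light measured from a ship receding (or approaching) at half the speed of light:
print(add_velocities(C, 0.5 * C) / C)    # 1.0 -- still exactly c
print(add_velocities(C, -0.5 * C) / C)   # 1.0 -- still exactly c

# Any slower speed is not invariant: a probe doing 0.9c looks different
# to an observer moving at 0.5c relative to the ship that launched it.
print(add_velocities(0.9 * C, 0.5 * C) / C)  # about 0.9655, not 0.9
```

No speed other than c survives the change of frame unaltered, which is the sense in which the invariance of the speed of light is unique.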

Once it is established that the speed of light is a constant, then the rest of Einstein's special theory of relativity follows. Indeed, modern treatments of relativity start with the postulate that the speed of light is one of the natural constants of the universe (like Planck's constant and the charge on the electron) and deduce the rest of relativity from that.3 The fact that all of nature is observed to obey the equations of relativity is proof of the original postulate. Many consequences of relativity are well known: the mass increase, the contraction of space, effects on electric and magnetic fields, etc. What I want to discuss for the remainder of this Appendix is the effect of relativity on the concept of time,

especially on the concept of simultaneity.

2. Simultaneity and the time dilation

What relativity does to time is the hardest part of the theory to understand. Whenever people get into trouble with relativity, it is almost always because they do not fully understand the part of the theory that relates to time. Relativity studies the relationship between events in space and events in time. We speak of an event as taking place at a certain point in space and at a certain instant in time. Two events are simultaneous if they occur at the same time. This definition of simultaneity seems unambiguous, but even such an elementary

concept becomes very mysterious when we look at it relativistically.

FIGURE 13. Light flashes from ships A and C are seen simultaneously on ship B, and also by the observer on Earth. Ship

B claims that the light flashes were emitted simultaneously from A and C. Intuition tells us that if space traveler Richard Seaton sees two events happening at the same time, then John Star, on another ship, ought also to see the same two events taking place simultaneously. One of Einstein's great talents, however, was his ability to prove intuition wrong by following simple mathematics to its logical conclusions. The most remarkable feature of Einstein's relativity was the counter-intuitive fact that two events that are simultaneous to one observer are not necessarily simultaneous to another observer. With this discovery, Einstein overturned everybody's ideas of time.

We can demonstrate very simply how this strange state of events comes about. Consider three spaceships, spaced a few million kilometers apart, passing Earth at some high speed (Figure 13). The exact distance between the ships doesn't matter. Say that Kimball Kinnison is in Spaceship B. He has made measurements that tell him spaceships A and C are equally distant from his own ship. Now A and C each explode a bomb at the same instant. Kimball Kinnison knows that they both explode at the same time because the light from both explosions reaches him at the same time and he knows he is halfway between A and C. That is how Kimball Kinnison defines simultaneity in his reference frame.

FIGURE 14. The observer on Earth reasons that ships A, B, and C must have been in the position shown in the figure at the time the light flashes were emitted, to allow time for the light to reach Earth. Since the light from A must travel a greater distance than the light from C, and since both light flashes travel with

the same speed, the light from A must have been emitted before the light from C. On the other hand, consider John Star on Earth. Spaceship B passes Earth just as the light flashes from the two bombs reach Earth. So John Star sees these two flashes at the same time Kimball Kinnison does. He sees the flashes simultaneously. But does he say that the two bombs exploded at the same time? Not if his reasoning is correct. John Star starts out by saying: Both explosions give off light that travels through space at the same speed. That's the one thing I know for sure. Now, this light will take some time to reach Earth.

Therefore, when the bombs explode, the three ships must be in the position shown in Figure 14. We see that the light from bomb A has farther to travel than the light from bomb C. But if the two flashes reach earth at the same time, then bomb A must have gone off before bomb C! (Remember both light pulses travel at the same speed.)
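The disagreement can be made quantitative with the Lorentz transformation introduced in Section 3 of this Appendix. If the two explosions are simultaneous in the ships' frame (Δt' = 0) and are separated by a distance Δx' along the direction of motion, the Earth frame assigns them a time difference

$$ \Delta t = \frac{v\,\Delta x'/c^2}{\sqrt{1 - v^2/c^2}} . $$

For purely illustrative numbers, not taken from the text: with the ships moving at half the speed of light and A and C six million kilometers apart, Δt comes out to about 11.5 seconds, with bomb A, the one farther from Earth in Figure 14, assigned the earlier time.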

FIGURE 15. An observer in a spaceship measures the time required for a light flash to travel from a source to a mirror and then back to a detector. Because the speed of light is a constant, we find that Kimball Kinnison and John Star inexorably disagree about the timing of the bombs. One of them says they both went off at the same time; the other says bomb A went off before bomb C. Time has gone awry. In the jargon of relativity, we speak of Kimball Kinnison as being in one reference frame, while John Star is in another. The two frames are moving relative to each other. One of the chief functions of relativity theory is to predict

how events taking place in one frame appear to people in the other frame. What we have just shown is that two events that are simultaneous in one frame are not necessarily simultaneous in another that is in motion relative to the first frame. (For two events to appear simultaneous in both frames, they would have to be located at the same point in space.) Another distortion of time is demonstrated by a different experiment. Suppose Kimball Kinnison sets up a light source, a detector, and a mirror, as shown in Figure 15. He flashes the light and measures the time it takes for a short pulse to go from the source to the mirror and back. Let us say that the time is one microsecond. What does John Star see? He is standing on

Earth as Kimball Kinnison's ship flashes by, and the path of the light pulse looks to him as shown in Figure 16. The drawing shows not three different ships but the same ship as seen at three different times while it moves past Earth.
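A little further on the text notes that the time dilation follows from this geometry and nothing more than the Pythagorean theorem; for readers who want to see the algebra, here is the standard reconstruction (mine, not quoted from the book). Call L the source-to-mirror distance, which is perpendicular to the ship's motion:

$$ \text{on the ship: } \Delta t' = \frac{2L}{c}, \qquad
\text{from Earth: } c\,\Delta t = 2\sqrt{L^2 + \left(\tfrac{v\,\Delta t}{2}\right)^2} . $$

Squaring the second relation and substituting 4L^2 = c^2\Delta t'^2 from the first gives c^2\Delta t^2 = c^2\Delta t'^2 + v^2\Delta t^2, and therefore

$$ \Delta t = \frac{\Delta t'}{\sqrt{1 - v^2/c^2}} , $$

so the Earth observer always measures the longer interval.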

FIGURE 16. An observer on Earth

observes the same light flash as in Figure 15. Because the ship is moving, he sees the light flash travel farther than the observer on the ship. However, both see the light traveling with the same speed, so the observer on Earth concludes that the light takes a longer time to make the round trip than does the observer on the ship. Now, as Star observes, the light path from source to mirror to detector is much longer than it appeared to Kinnison standing in his ship. But, remember that the light travels with the same speed regardless of the observer. So if Star sees it traveling over a longer path, it must be taking a longer time to traverse that path. But Star and Kinnison are measuring the

time interval between the same pair of events: emission of the light flash from the source, and arrival at the detector. We see, then, that the time between these two events depends on the motion of the observer relative to the events. Kinnison says the light flash takes one microsecond to go from source to detector. Star says it takes a longer time, let's say five microseconds. This means that Star's clock makes five ticks for every one tick of Kinnison's clock. Star says Kinnison's clock is running slow compared to his own. This effect is the famous time dilation: the slowing down of time in a moving reference frame. The time dilation is a necessary consequence of the fact that the speed of light is a

constant. A formula for the time dilation can be derived from Figure 16 using nothing more than the Pythagorean Theorem. Now we are in a position to ask some interesting questions about communications to and from moving spaceships. Assume a situation common in many science fiction stories: Kimball Kinnison is scooting along in his ship at one-half the speed of light, engaging in a conversation with headquarters back on Earth, using a device that provides instantaneous communication between any two points in the universe. First of all, we ask: What do we mean by instantaneous communication? We mean that signals are transmitted and received at the same

instant; that is, transmission and reception are simultaneous events. But we just showed that "simultaneous" is all in the eye of the beholder. What is simultaneous to Kimball Kinnison in his ship will not be simultaneous to the people on Earth. Communication that looks instantaneous in one frame will not be instantaneous in another. So what does instantaneous communication mean? We will come back to this question shortly.

3. The geometry of spacetime

A set of equations known as the Lorentz transformations allows us to find out what

is happening in one reference frame if we know what is happening in another. That is, if we know the position and time of an event in a moving spaceship, the Lorentz transformations give us the position and time of the same event as seen by an observer on Earth. Hendrick Antoon Lorentz, a Dutchman, was one of the giants of 19th-century physics. He discovered the equations that bear his name while studying other equations that describe the properties of electromagnetic waves. The Lorentz equations are the same as those that Einstein derived while developing the special theory of relativity.
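For readers who would like to see the equations themselves, their standard form, given here in the usual textbook notation rather than quoted from this book, for a primed frame moving with velocity v along the x-axis is

$$ x' = \gamma\,(x - vt), \qquad t' = \gamma\!\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} . $$

Two familiar results drop out at once: two ticks of a clock riding with the ship (Δx' = 0) satisfy Δt = γΔt', the time dilation obtained geometrically from Figure 16; and two events simultaneous on the ship (Δt' = 0) are separated in Earth time by Δt = γvΔx'/c^2, the relativity of simultaneity illustrated with the three spaceships.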

They are named after Lorentz because Lorentz developed them first. The irony of this situation, however, is that Lorentz never completely understood his equations. If he had, he would have been the inventor of relativity. But Lorentz never believed what the equations told him about time: that time could be different to two observers. He was bound by classical ways of thinking, according to which time is the same for all observers. It was Einstein's ability to break away from this classical thinking that characterized his peculiar genius. He was aided by the brilliant mathematician Hermann Minkowski, who originated the concept that relativity could best be understood by thinking of space and time

as a single entity, that is, as a space-time continuum. In other words, instead of describing the universe by three dimensions of space and a completely separate dimension of time, we now, following Minkowski, conceive of it as a four-dimensional spacetime. In this new worldview, time is one of four dimensions, on an equal footing with the three spatial dimensions. I emphasize that "equal footing" does not mean that time is the same as space. I merely mean that time is treated the same as space mathematically. To show where events are located in this four-dimensional universe, we use a spacetime diagram, such as Figure 17. We

will be describing a story in which Captain Spock is traveling at high speed away from Earth and has an encounter with a Klingon ship. The horizontal axis of the diagram shows the distance (in light-years) away from the starting point, Earth. The vertical axis is a time scale (in years). There can also be a y-axis and a z-axis, but these are omitted from a two-dimensional drawing. Any point on this diagram represents the location of an event: how far it is from Earth and at what time. In Figure 17 we have located Earth at x = 0. The time when all clocks are set to zero is labeled t = 0. As time passes, the position of Earth advances upward on the time axis. Note that the x-axis is the

location of all the places where t = 0 in the Earth's frame.

FIGURE 17. A spacetime diagram, showing the trajectory of a light beam, a ship traveling more slowly than light, and an object traveling faster than light. In Figure 17 we also show Spock's spaceship going away from Earth at one-half the speed of light, traveling along the line labeled SHIP TRAJECTORY. The ship has passed by Earth at zero time (t = 0). Its position is shown after 10 years (Earth time) have elapsed and it has traveled 5 light-years through space. (In this discussion, all ships travel with constant speed.) A beam of light projected from Earth at zero time goes a distance of one light-year in a time interval of one year.
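These trajectories are easy to check numerically. The short sketch below is my own illustration (the function name is invented, not the book's); it simply divides distance traveled by time elapsed, in light-years and years, so that the speed of light is 1 Ly/y:

# Units: light-years (Ly) for distance, years (y) for time, so c = 1 Ly/y.
C = 1.0

def classify(distance_ly, time_y):
    # Speed is the distance covered divided by the time elapsed.
    speed = distance_ly / time_y
    if speed < C:
        return "slower than light"
    elif speed == C:
        return "a light beam"
    else:
        return "faster than light"

print(classify(5, 10))    # Spock's ship: 5 Ly in 10 y -> slower than light
print(classify(10, 10))   # one light-year per year -> a light beam
print(classify(20, 10))   # 2 Ly per year -> faster than light

On the diagram, these three cases correspond to lines steeper than, equal to, and shallower than the dashed light line.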

Accordingly, it travels along a 45-degree path represented by the dashed line. A ship traveling faster than light would move along a trajectory lying below the light path. In this article we are not going to consider FTL spaceships, but anything we say about signals traveling faster than light also applies to spaceships or any other kind of object. We are dealing here only with spacetime geometry and not with the nature of any particular moving thing. The diagram of Figure 17 is the universe as seen from Earth. It is the Earth's reference frame, and is an ordinary Cartesian coordinate system, with axes perpendicular to each other. Now let us see what the ship's reference frame looks like. You must understand, of course, that

the people in the ship feel themselves to be standing still. It appears to them that the rest of the universe is moving. So spacetime, to Captain Spock and his crew, is also rectangular. However, when the inhabitants of Earth look across to the moving ship, they see its spacetime grid altered. Figure 18 shows how the ship's coordinate system appears at the instant the ship passes Earth, when both Earth and ship are at the origin (x = 0 and t = 0). The ship's grid is distorted. Its x-axis is leaning up, and its t-axis is leaning to the right. The two axes are no longer perpendicular to each other. We call the ship's axes x' and t'. Remember that the x-axis represents all the places where time is zero in a

reference frame fixed on Earth. Likewise the x'-axis represents all the places where time is zero in a reference frame attached to the moving ship. We are able to plot the axes in Figure 18 by using the Lorentz transformation equations. These allow us to calculate the coordinates x and t of an event in one frame of reference if we know the coordinates x' and t' of the same event in another frame (or vice versa). Here x and t represent the position and time of an event as seen by Earth observers, while x' and t' represent the position and time of the same event as seen by Spock, in his ship moving with velocity v relative to Earth. In the example we are to consider, the spaceship is moving at one-half the

speed of light (c), so v = 0.5c, or v/c = 0.5.

FIGURE 18. A spacetime diagram, showing the distortion of spacetime in a moving frame of reference. Observers on Earth see space and time represented by the horizontal (x) and vertical (t) axes. However, space (x') and time (t') relative to the moving ship are represented to Earth observers by the tilted lines. As a result, an event that occurs at zero time according to people on the ship takes place five years later according to observers on Earth. A common factor in the transformation equations is the quantity F = 1/√[1 - (v/c)²], which in this example has the value 1.155.
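Readers who want to verify the number 1.155 can do so in a couple of lines. This is just a sketch of the arithmetic, not anything from the original text:

import math

def lorentz_factor(v_over_c):
    # F = 1 / sqrt(1 - (v/c)^2)
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

print(lorentz_factor(0.5))   # 1.1547..., quoted in the text as 1.155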

The Lorentz transformation equations are, then:

t' = F(t - vx/c²)   (1)
and
x' = F(x - vt)   (2)

while in the other direction they are:

t = F(t' + vx'/c²)   (3)
and
x = F(x' + vt')   (4)

In working with these equations, we use years (y) for units of time and light-years (Ly) for units of distance.
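With these units the speed of light is 1, the factor vx/c² reduces to (v/c)x, and the four equations can be written out directly. The sketch below is my own illustration, with function names chosen for this example only:

import math

def lorentz_factor(beta):            # beta = v/c
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def earth_to_ship(x, t, beta):
    # Equations (1) and (2): Earth-frame event (x, t) -> ship-frame event (x', t')
    F = lorentz_factor(beta)
    return F * (x - beta * t), F * (t - beta * x)

def ship_to_earth(x_prime, t_prime, beta):
    # Equations (3) and (4): ship-frame event (x', t') -> Earth-frame event (x, t)
    F = lorentz_factor(beta)
    return F * (x_prime + beta * t_prime), F * (t_prime + beta * x_prime)

# Transforming an event into the ship's frame and back returns the original event.
x, t = 3.0, 7.0
x_prime, t_prime = earth_to_ship(x, t, 0.5)
print(ship_to_earth(x_prime, t_prime, 0.5))   # approximately (3.0, 7.0)

The calculations that follow are all special cases of these two transformations.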

The speed of light then becomes, conveniently, c = 1 Ly/y. The ship is located at the origin of its own coordinate system, so the position of the ship is given by x' = 0. Putting this value into Equation (2) we find, to no one's surprise, that x/t = v, the velocity of the ship in the Earth frame, while on the spacetime diagram the slope of the ship's trajectory (which is also the t'-axis) is the quantity t/x = 1/v = 2 y/Ly. Similarly, we get the x'-axis by setting t' = 0 in Equation (1), from which we find t/x = v/c² = 0.5 y/Ly. This equation gives the slope of the

synchrony line, which represents all the points that exist simultaneously in the ship's reference frame, as seen from the Earth's frame. Now suppose we blow up a Klingon ship at point A along the x'-axis, some distance away from Spock's ship. This explosion takes place at zero time according to the ship's clock (t' = 0), but it is not zero time according to Earth's clocks. We can calculate what the Earth time is by using the Lorentz transformation equations. We will put in the following numbers: Spock's ship is just passing Earth and traveling at one-half the speed of light. The Klingon ship is 10 light-years away from Earth when it is blown up (x = 10 Ly

in Earth's frame). We use Equation (4), setting t' = 0. We then find that

x' = x/F = 10 Ly/1.155 = 8.66 Ly

The time t in the Earth frame can then be found by substituting t' = 0 and x = 10 Ly into Equation (1). The result is t = 5.0 y, in agreement with the Klingon ship's position on the t-axis in Figure 18. What we have found is that the explosion of the Klingon ship takes place at zero time and at a distance of 8.66 light-years according to the observations of Mr. Spock. But, according to the observers on Earth, this same event occurs 5 years after zero time and at a distance of 10 light-years. Both space and time are

transformed. Mr. Spock finds that the distance to the Klingon ship is less than the 10 light-years measured by the earthbound observers. He finds, as a result of his motion, that space is contracted relative to the space experienced by Earth, in accordance with the famous Lorentz-Fitzgerald contraction. This story also illustrates our previous remarks concerning simultaneity. All events happening along the line marked t = 0 (the x-axis) are simultaneous according to the Earth observers. But according to the people on the ship, it is the events happening along the line marked t' = 0 that are simultaneous. And these are a different set of events. So the people on Earth and the people in the ship disagree about what

is meant by the word "simultaneous." To the people in the ship, the blowing-up of the Klingon ship is simultaneous with the instant their clock hits zero as they pass by Earth. To the people on Earth, the explosion happens when their clock reads 5 years. However, the people on Earth don't actually see the explosion until the flash reaches them 10 years later, when their clock reads 15 years. So precisely what do we mean when we talk about sending a message instantaneously? An instantaneous message is transmitted and received simultaneously. But if the sender and receiver are moving relative to each other, then it is not possible for the transmission and reception of the message to be

simultaneous for both the sender and the receiver. If the travel time is zero for one, it is not zero for the other. Immediately we are in trouble. There is a way out. We can make the following rule: Let the transmitter decide on the meaning of instantaneous. Suppose, for example, we send a message through a wormhole from one part of space to another. The wormhole has to pass along some particular path in spacetime. The simplest way to do it is to say that the wormhole will travel parallel to the x-axis in the reference frame of the transmitter. So a wormhole projected from Earth will go along the line t = 0 (or any line parallel to t = 0), while a wormhole

projected from the moving spaceship will go in a direction parallel to the line t' = 0. The wormhole, as you see, is simply a device to ensure that a message traveling along it goes from one end to the other simultaneously. Let us now see what consequences this assumption generates.

4. Consequences of instantaneous communication

Consider the following scenario: 10 years (Earth time) have passed since Spock's ship left the vicinity of Earth. Figure 19 shows how things are arranged now. Earth has moved up the time axis to the 10-year point. The ship has moved along its

trajectory a distance of 5 light-years at one-half the speed of light, according to Earth measurements. But the passengers on the ship see things differently. They agree that they are going at a velocity of one-half the speed of light. At least, they see Earth receding from them with that velocity. However, they do not agree on the length of time they have been traveling. The Lorentz transformations allow us to relate ship time to Earth time. In the ship, x' = 0, so that Equation (3) can be used to find the ship time (t') by letting t = 10 years:

t' = t/F = 10 y/1.155 = 8.66 y

The distance from ship to Earth can be found by

x' = vt' = (0.5 Ly/y) × 8.66 y = 4.33 Ly

Thus, Mr. Spock and his passengers on the ship are of the opinion that they have gone a distance of 4.33 light-years in a time of 8.66 years, an example of both space and time contraction. Now, suppose the people on Earth project a wormhole out through space to the ship. And let us assume that messages can be transmitted by radio (or laser beam, or whatever) through this wormhole. The wormhole, as we agreed in the last section, transmits messages instantaneously from one place to another. Or, let's say, it tunnels through space, starting from the Earth's position, and emerges 5 light-years away at the same

instant of time. Hence the name wormhole. Its path is represented by the arrow going from Earth to ship, parallel to the x-axis. Just to make things specific, let's assume that the ship passed Earth on January 1, 2100. This is when all the clocks were set. A message leaves Earth 10 years later, on January 1, 2110, passes through the wormhole, and arrives at the ship at the same time, according to the Earth point of view. But on the ship, the message is received at the 8.66-year point. If Earth is sending news of New Year's Day, 2110 A.D., the ship will receive that news in August 2108, ship time. It is as though the message has gone backward in time while going from Earth to ship.
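The backward-in-time appearance of the message is just the arithmetic of Equation (3) with x' = 0. Here is a minimal sketch of the numbers (my own illustration; the calendar conversion is approximate):

import math

beta = 0.5
F = 1.0 / math.sqrt(1.0 - beta ** 2)     # about 1.1547

t_sent = 10.0                            # Earth time of transmission, in years
t_received = t_sent / F                  # ship-clock reading at reception: about 8.66 y

# Clocks were zeroed on January 1, 2100, so convert 8.66 years to a calendar date.
years = int(t_received)                              # 8 full years -> 2108
months = (t_received - years) * 12                   # about 7.9 months -> late August
print(t_received, 2100 + years, round(months))       # 8.66..., 2108, 8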

FIGURE 19. A ship traveling at one-half the speed of light passes Earth, at which time the clocks are set to zero. Ten years later a message is sent instantaneously (in the Earth frame) to the ship. The ship sends a reply instantaneously (in the ship's frame) to Earth. The reply reaches Earth 2.5 years before the original message was sent. Now what happens if the ship replies to this message? Before we can answer this question, we must decide precisely how this reply is to travel. Would it be possible, for example, simply to have the ship send its reply straight back through the Earth's wormhole, so that the reply would come back simultaneous with the original message? After all, if the pipe has

two ends, there is nothing to forbid it working equally well in both directions. Or is there? It turns out that there is a very powerful argument against this scheme, simple though it may sound. If we use the wormhole as a two-way tube, the message from Earth to ship travels toward the past, from January 2110 to August 2108. The message from ship to Earth, on the other hand, travels toward the future, from August 2108 to January 2110. There is a difference, an asymmetry, between the message and its reply, between Earth and ship. Imagine what would happen if there were

ten ships out there, all going away from Earth. Earth sends wormholes out to all ten ships and knows its messages go toward the past to reach these ships. The ships, on the other hand, have to send their messages toward the future to reach Earth through the same wormhole. The Earth can now say: I am unique, because I am the only one whose messages go toward the past. As a result, Earth can claim that it is absolutely at rest, while it is the ships that are in motion. But this statement violates the most fundamental postulate of relativity: the idea that there is no privileged reference frame, that no frame can be considered absolutely at rest. If we say a ship is moving relative to Earth, we must be able

to say with equal truth that Earth is moving relative to the ship. There should be no way to tell the difference. And so communication between Earth and ship via wormhole must have symmetrical properties: If the message from Earth to ship goes toward the past, then the reply message from ship to Earth must also go toward the past. This is the only way the fundamental principle of relativity can be satisfied. Now we continue with our original argument. We are faced with the following situation: In August 2108 (ship time), Mr. Spock receives a message sent from Earth on January 1, 2110 (Earth time). Mr. Spock now wants to send a reply back to

Earth via the ship's own wormhole. How does the ship's wormhole operate? If we play the game according to consistent rules, a message originating from any reference frame must travel along a path that requires zero transmission time within that frame. Thus, the message must go instantaneously from ship to Earth according to the ship's clock. That means that the wormhole projected from the ship must lie along a line that represents t' = 8.66 years on the spacetime diagram. This line will pass through the ship's position and be parallel to the line labeled t' = 0. Since all points on this line represent the same time, we may call this the ship's synchrony line. When does a message from the ship reach Earth? We

can find this Earth time (t) by using Equation (1), knowing that on Earth x = 0, while on the ship t' = 8.66 years:

t = t'/F = 8.66 y/1.155 = 7.5 y

We have obtained a remarkable result. A message transmitted from the ship in August 2108 (ship time) will reach Earth at the 7.5-year point, that is, in July 2107. See what we have done. The original message left Earth in January 2110 and the reply reached Earth in July 2107, two and a half years before the original message was sent. Here we have all the ingredients for a classic time-travel paradox, for while the characters

themselves have not traveled through time, sending messages into the past creates an equivalent paradox. Here is the kind of thing that can happen: Sometime in 2109 a disaster occurs, say the assassination of a president. In 2110 a message is sent to the spaceship telling them of this event. The ship immediately sends a message back to Earth, telling them all the details of the assassination. Since Earth receives the message before the assassination takes place, the authorities are able to apprehend the perpetrator before he fires the shot. But then the assassination never takes place, so no message is sent to the ship. So no warning is received by Earth and the president is killed. So a message is sent to

the ship and the assassination is prevented. And so on... This is a true paradox. If an event takes place, then it does not take place. This is an impossible state of affairs. An event cannot take place and not take place. Either it happens or it does not happen. Therefore, the entire chain of events is an impossibility. This paradox occurs not only for instantaneous communication, but for all faster-than-light communication, as well. Furthermore, it does not depend on the use of a wormhole, which was added to this tale simply for the sake of color. More detailed calculations show that the same result is obtained for any kind of signal

that travels faster than the speed of light. Under such circumstances you can find a range of velocities for the spaceship that will create the paradox. The fact that any kind of FTL communication makes it possible to create a paradoxical situation is sufficient cause to believe that faster-than-light communication is impossible.
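The "more detailed calculations" mentioned above are easy to sketch numerically. The following is my own illustration, not the author's: it assumes that each leg of the exchange travels at u times the speed of light in the frame of whoever sends it, and it reports the Earth time at which the reply returns. Whenever that return time is earlier than the 10-year transmission time, the paradox appears.

import math

def reply_arrival_time(T, beta, u):
    # Earth time at which the reply returns, with c = 1 (years and light-years).
    # T: Earth time of the original transmission; beta: ship speed v/c;
    # u: signal speed as a multiple of c, the same multiple in each sender's frame.
    F = 1.0 / math.sqrt(1.0 - beta ** 2)
    # Leg 1, in the Earth frame: the signal x = u*(t - T) catches the ship at x = beta*t.
    t1 = u * T / (u - beta)
    x1 = beta * t1
    t1_ship = F * (t1 - beta * x1)         # Equation (1): reception time on the ship's clock
    # Leg 2, in the ship frame: the reply chases Earth, which sits at x' = -beta*t'.
    t2_ship = u * t1_ship / (u - beta)
    x2_ship = -beta * t2_ship
    return F * (t2_ship + beta * x2_ship)  # Equation (3): arrival time on Earth's clock

for u in (2.0, 4.0, 10.0, 1e6):            # 1e6 stands in for "instantaneous"
    print(u, round(reply_arrival_time(10.0, 0.5, u), 2))
# Return times: 13.33, 9.8, 8.31, and 7.5 years. For a ship at half the speed of
# light, signals faster than roughly 3.7c bring the reply back before the original
# message was sent, and the instantaneous case reproduces the 7.5-year result above.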

