
Edge 70 June 15-19, 2000


(23,174 words)
[Excerpts from this edition of Edge are being simultaneously published in
German by Frankfurter Allgemeine Zeitung (Frank Schirrmacher, Publisher).]

THE THIRD CULTURE

THE SECOND COMING - A MANIFESTO


By David Gelernter
Everything is up for grabs. Everything will change. There is a magnificent sweep of
intellectual landscape right in front of us.

THE REALITY CLUB


Stewart Brand, David Ditzel, John C. Dvorak, Freeman Dyson, George Dyson,
Douglas Rushkoff, Rod Brooks, Lee Smolin, Jaron Lanier, David Farber, Danny Hillis,
Vinod Khosla, John McCarthy on "The Second Coming - A Manifesto" by David
Gelernter
Marc Hauser, Milford Wolpoff, V.S. Ramachandran, and Nicholas Humphrey on V.S.
Ramachandran's "Mirror Neurons and imitation learning as the driving force behind
"the great leap forward" in human evolution"

THE THIRD CULTURE


THE SECOND COMING - A MANIFESTO
By David Gelernter

Introduction by
John Brockman
David Gelernter .....
"...prophesied the rise of the World Wide Web. He understood the idea half a decade
before it happened." (John Markoff)
"...is a treasure in the world of computer science...the most articulate and
thoughtful of the great living practitioners" (Jaron Lanier)
"...is one of the pioneers in getting many computers to work together and cooperate
on solving a single problem, which is the future of computing." (Danny Hillis)
"...is one of the most brilliant and visionary computer scientists of our time." (Bill
Joy)
Yale computer scientist David Gelernter entered the public mind one morning in
January '92 when The New York Sunday Times ran his picture on the front page of
the business section; it filled nearly the whole page. The text of the accompanying
story occupied almost another whole page inside.
In 1991 Gelernter had published a book for technologists (an extended research
paper) called Mirror Worlds, claiming in effect that one day, there would be
something like the Web. As well as forecasting the Web, the book, according to the
people who built these systems, also helped lay the basis for the internet
programming language "Java" and Sun Microsystems' "Jini."
Gelernter's earlier work on his parallel programming language "Linda" (which allows
you to distribute a computer program across a multitude of processors and thus
break down problems into a multitude of parts in order to solve them more quickly)
and "tuple spaces" underlies such modern-day systems as Sun's JavaSpaces, IBM's
T-Spaces, a Lucent company's new "InfernoSpaces" and many other descendants
worldwide.
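Linda's whole coordination model fits in three operations: "out" deposits a tuple into the shared space, "in" withdraws a tuple matching a template, "rd" reads one without removing it. The toy Python sketch below is our own single-process illustration of that model, not Gelernter's code (a real Linda distributes the space across many machines, and "in" is renamed "take" here because "in" is a reserved word in Python):

import threading

class TupleSpace:
    # Toy in-process tuple space in the spirit of Linda.
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        # Deposit a tuple into the space (Linda's "out").
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template, tup):
        # None acts as a wildcard field in the template.
        return len(template) == len(tup) and all(
            t is None or t == f for t, f in zip(template, tup))

    def take(self, *template):
        # Withdraw a matching tuple, blocking until one arrives (Linda's "in").
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

space = TupleSpace()
space.out("task", 42)              # a producer deposits work
print(space.take("task", None))    # a worker withdraws it: ('task', 42)

Producers and workers never name each other; they meet only through the shape of the tuples, which is what lets one program spread across a multitude of processors.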
By mid-'92 this set of ideas had taken hold and was exerting a strong influence. By
1993 the Internet was growing fast, and the Web was about to be launched.
Gelernter's research group at Yale was an acknowledged world leader in network
software and more important, it was known for "The Vision Thing", for the big
picture.
In June '93 everything stopped for Gelernter when he was critically injured by a
terrorist mailbomb. He was out of action for the rest of '93 and most of '94 as the
Web took off, the Internet became an international phenomenon and his aggressive
forecasts started to come true. Gelernter endured numerous surgeries through '95,
and then a long recuperation period.
Now Gelernter is back. In this audacious manifesto, "The Second Coming", he
writes: "Everything is up for grabs. Everything will change. There is a magnificent
sweep of intellectual landscape right in front of us."
JB

DAVID GELERNTER, Professor of Computer Science at Yale University and adjunct
fellow at the Manhattan Institute, is a leading figure in the third generation of
Artificial Intelligence scientists, known for his programming language called "Linda"
that made it possible to link computers together to work on a single problem. He
has since emerged as one of the seminal thinkers in the field known as parallel, or
distributed, computing.

He is the author of Mirror Worlds (1991), The Muse In The Machine (1994), 1939:
The Lost World Of The Fair (1995), and Drawing Life: Surviving The Unabomber
(1998).
Click here for David Gelernter's Edge Bio Page

THE SECOND COMING - A MANIFESTO


By David Gelernter
Any Microsecond Now
Computing will be transformed. It's not just that our problems are big, they are big
and obvious. It's not just that the solutions are simple, they are simple and right
under our noses. It's not just that hardware is more advanced than software; the
last big operating-systems breakthrough was the Macintosh, sixteen years ago, and
today's hottest item is Linux, which is a version of Unix, which was new in 1976.
Users react to the hard truth that commercial software applications tend to be
badly-designed, badly-made, incomprehensible and obsolete by blaming themselves
("Computers for Morons," "Operating Systems for Livestock"), and meanwhile,
money surges through our communal imagination like beer from burst barrels.
Billions. Naturally the atmosphere is a little strange; change is coming, soon.
Everything Old Is New Again
1. No matter how certain its eventual coming, an event whose exact time and form
of arrival are unknown vanishes when we picture the future. We tend not to believe
in the next big war or economic swing; we certainly don't believe in the next big
software revolution.
2. Because we don't believe in technological change (we only say we do), we accept
bad computer products with a shrug; we work around them, make the best of them
and (like fatalistic sixteenth-century French peasants) barely even notice their
defects instead of demanding that they be fixed and changed.
3. Everything is up for grabs. Everything will change. There is a magnificent sweep
of intellectual landscape right in front of us.
4. The Orwell law of the future: any new technology that can be tried will be. Like
Adam Smith's invisible hand (leading capitalist economies toward ever-increasing
wealth), Orwell's Law is an empirical fact of life.
Ripe, Ready And Hanging By A Thread
5. We know that big developments are inevitable in the software world if only
because nothing in that world corresponds to a "book." You can see a book whole
from the outside. You know in advance how a book is laid out - where the contents
or the index will be - and how to "operate" one. As you work through it, you always
know where you stand: how far you have gone and how much is left. "Book" can be
a physical object or a text - an abstraction with many interchangeable physical
embodiments. These properties don't hold for file systems or web sites. You can't
see or judge one from the outside, anticipate the layout, tell where you stand as
you work your way through.
Whenever we are organizing information, the book is too powerful an idea to do
without in some form or other.
6. Miniaturization was the big theme in the first age of computers: rising power,
falling prices, computers for everybody. Theme of the Second Age now approaching:
computing transcends computers. Information travels through a sea of anonymous,
interchangeable computers like a breeze through tall grass. A desktop computer is a
scooped-out hole in the beach where information from the Cybersphere wells up like
seawater.
7. "The network is the computer" yes; but we're less interested in computers all
the time. The real topic in astronomy is the cosmos, not telescopes. The real topic in
computing is the Cybersphere and the cyberstructures in it, not the computers we
use as telescopes and tuners.
8. The software systems we depend on most today are operating systems (Unix, the
Macintosh OS, Windows et al.) and browsers (Internet Explorer, Netscape
Communicator...). Operating systems are connectors that fasten users to
computers; they attach to the computer at one end, the user at the other. Browsers
fasten users to remote computers, to "servers" on the internet.
Today's operating systems and browsers are obsolete because people no longer
want to be connected to computers - near ones OR remote ones. (They probably
never did.) They want to be connected to information. In the future, people are
connected to cyberbodies; cyberbodies drift in the computational cosmos also
known as the Swarm, the Cybersphere.
From The Prim Pristine Net To The Omnipresent Swarm
9. The computing future is based on "cyberbodies" - self-contained, neatly-ordered,
beautifully-laid-out collections of information, like immaculate giant gardens.
10. You will walk up to any "tuner" (a computer at home, work or the supermarket,
or a TV, a telephone, any kind of electronic device) and slip in a "calling card," which
identifies a cyberbody. The tuner tunes it in. The cyberbody arrives and settles in like
a bluebird perching on a branch.
11. Your whole electronic life will be stored in a cyberbody. You can summon it to
any tuner at any time.
12. By slipping it your calling card, you customize any electronic device you touch;
for as long as it holds your card, the machine knows your habits and preferences
better than you know them yourself.
13. Any well-designed next-generation electronic gadget will come with a ``Disable
Omniscience'' button.
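Points 10-12 describe a mechanism more than a product: the card carries nothing but an identity, and everything else lives in the Cybersphere, fetched on demand and forgotten on eject. A deliberately naive sketch in Python - every name in it (CYBERSPHERE, Tuner, the card id) is invented for illustration:

# All state lives "out there"; the device holds none of its own.
CYBERSPHERE = {
    "card-7f3e": {"language": "en", "volume": 3, "home_stream": "david"},
}

class Tuner:
    # Any electronic device: TV, telephone, supermarket terminal.
    def __init__(self):
        self.profile = None

    def insert_card(self, card_id):
        # The tuner "tunes in" the cyberbody the card identifies.
        self.profile = CYBERSPHERE[card_id]

    def eject_card(self):
        # For as long as it holds your card - and no longer - the
        # machine knows your habits and preferences.
        self.profile = None

tv = Tuner()
tv.insert_card("card-7f3e")
print(tv.profile["language"])      # the device now behaves as yours
tv.eject_card()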
14. The important challenge in computing today is to spend computing power, not
hoard it.
16. The future is dense with computers. They will hang around everywhere in lush
growths like Spanish moss. They will swarm like locusts. But a swarm is not merely
a big crowd. The individuals in the swarm lose their identities. The computers that
make up this global swarm will blend together into the seamless substance of the
Cybersphere. Within the swarm, individual computers will be as anonymous as
molecules of air.
17. A cyberbody can be replicated or distributed over many computers; can inhabit
many computers at the same time. If the Cybersphere's computers are tiles in a
paved courtyard, a cyberbody is a cloud's drifting shadow covering many tiles
simultaneously.
18. But the Net will change radically before it dies. When you deal with a remote
web site, you largely bypass the power of your desktop in favor of the far-off power
of a web server. Using your powerful desktop computer as a mere channel to reach
web sites, reaching through and beyond it instead of using it, is like renting a
Hyundai and keeping your Porsche in the garage. Like executing programs out of disk
storage instead of main memory and cache. The Web makes the desktop impotent.
19. The power of desktop machines is a magnet that will reverse today's "everything
onto the Web!" trend. Desktop power will inevitably drag information out of remote
servers onto desktops.
20. If a million people use a Web site simultaneously, doesn't that mean that we
must have a heavy-duty remote server to keep them all happy? No; we could move
the site onto a million desktops and use the internet for coordination. The "site" is
like a military unit in the field, the general moving with his troops (or like a hockey
team in constant swarming motion). (We used essentially this technique to build the
first tuple space implementations. They seemed to depend on a shared server, but
the server was an illusion; there was no server, just a swarm of clients.) Could
Amazon.com be an itinerant horde instead of a fixed Central Command Post? Yes.
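Point 20 needs only one technical ingredient: a rule that lets every client compute, without consulting anyone, which peer currently holds which page. Rendezvous (highest-random-weight) hashing is one such rule; the sketch below is our illustration, not the mechanism the original tuple-space implementations used:

import hashlib

def owner(key: str, peers: list[str]) -> str:
    # Every client scores each peer against the key and picks the
    # highest scorer, so all clients agree with no central directory.
    def score(peer: str) -> int:
        return int(hashlib.sha256(f"{peer}:{key}".encode()).hexdigest(), 16)
    return max(peers, key=score)

peers = ["desktop-a", "desktop-b", "desktop-c"]
print(owner("/catalog/page-17", peers))   # same answer on every desktop

When a desktop joins or leaves the swarm, only the keys it owned change hands, which is what lets the "site" march with its troops instead of camping on one server.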
Stranger Than Fiction: Computers Today
21. The windows-menus-mouse "desktop" interface, invented by Xerox and Apple
and now universal, was a brilliant invention and is now obsolete. It wastes screen
space on meaningless images, fails to provide adequate clues to what is inside the
files represented by those blurry little images, forces users to choose icons for the
desktop when the system could choose them better itself, and keeps users jockeying
windows (like parking attendants rearranging cars in a pint-sized Manhattan lot) in a
losing battle for an unimpeded view of the workspace - which is, ultimately,
unattainable. No such unimpeded view exists.
22. Icons and "collapsed views" seem new but we have met them before. Any book
has a "collapsed" or "iconified" view, namely its spine. An icon conveys far less
information than the average book spine and is much smaller. Should it be much
smaller? Might a horizontal stack of "book spines" onscreen be more useful than a
clutter of icons?
23. The computer mouse was a brilliant invention, but we can see today that it is a
bad design. Like any device that must be moved and placed precisely, it ought to
provide tactile feedback; it doesn't.
24. Metaphors have a profound effect on computing. The desktop metaphor traps us
in a "broad" instead of "deep" arrangement of information that is fundamentally
wrong for computer screens. Compared to a standard page of words, an actual
desktop is big and a computer screen is small. A desktop is easily extended (use
drawers, other desks, tables, the floor); a computer screen is not.
25. Apple could have described its interface as a pure "information landscape," with
no connection to a desktop; we invented this landscape (they might have explained)
the way a landscape architect or amusement park designer invents a landscape. We
invented an ideal space for seeing and managing computerized information. Our
landscape is imaginary, but you can still enter and move around it. The computer
screen is the window of your vehicle, the face-shield of your diving-helmet.
26. Under the desktop metaphor, the screen IS the interface - the interface is a
square foot or two of glowing colors on a glass panel. In the landscape metaphor,
the screen is just a viewing pane. When you look through it, you see the actual
interface lying beyond.
Problems On The Surface And Under The Surface
27. Modern computing is based on an analogy between computers and file cabinets
that is fundamentally wrong and affects nearly every move we make. (We store
"files" on disks, write "records," organize files into "folders" file-cabinet

language.) Computers are fundamentally unlike file cabinets because they can take
action.
28. Metaphors have a profound effect on computing: the file-cabinet metaphor traps
us in a "passive" instead of "active" view of information management that is
fundamentally wrong for computers.
29. The rigid file and directory system you are stuck with on your Mac or PC was
designed by programmers for programmers and is still a good system for
programmers. It is no good for non-programmers. It never was, and was never
intended to be.
30. If you have three pet dogs, give them names. If you have 10,000 head of cattle,
don't bother. Nowadays the idea of giving a name to every file on your computer is
ridiculous.
31. Our standard policy on file names has far-reaching consequences: it doesn't
merely force us to make up names where no name is called for; it also imposes strong
limits on our handling of an important class of documents - ones that arrive from
the outside world. A newly-arrived email message (for example) can't stand on its
own as a separate document - can't show up alongside other files in searches, sit
by itself on the desktop, be opened or printed independently; it has no name, so it
must be buried on arrival inside some existing file (the mail file) that does have a
name. The same holds for incoming photos and faxes, Web bookmarks, scanned
images...
32. You shouldn't have to put files in directories. The directories should reach out
and take them. If a file belongs in six directories, all six should reach out and grab it
automatically, simultaneously.
33. A file should be allowed to have no name, one name or many names. Many files
should be allowed to share one name. A file should be allowed to be in no directory,
one directory, or many directories. Many files should be allowed to share one
directory. Of these eight possibilities, only three are legal and the other five are
banned for no good reason.
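Read points 32-33 together and a "directory" stops being a container and becomes a standing query; all eight possibilities then fall out naturally. A minimal Python sketch (Doc and Directory are our names, not part of any real file system):

from dataclasses import dataclass, field

@dataclass
class Doc:
    text: str
    names: set[str] = field(default_factory=set)   # zero, one or many names

class Directory:
    # A directory as a standing query: it "reaches out" and takes
    # every document satisfying its predicate, automatically.
    def __init__(self, predicate):
        self.predicate = predicate

    def members(self, docs):
        return [d for d in docs if self.predicate(d)]

docs = [Doc("lunch with Natasha on Fifth Avenue"),
        Doc("memo: rain date for the picnic")]

fifth_ave = Directory(lambda d: "Fifth Avenue" in d.text)
rain = Directory(lambda d: "rain" in d.text)

# One document can fall into many directories at once, or into none;
# nobody ever "put" it anywhere.
print(len(fifth_ave.members(docs)), len(rain.members(docs)))   # 1 1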
Streams Of Time
34. In the beginning, computers dealt mainly in numbers and words. Today they
deal mainly with pictures. In a new period now emerging, they will deal mainly with
tangible time - time made visible and concrete. Chronologies and timelines tend to
be awkward in the off-computer world of paper, but they are natural online.
35. Computers make alphabetical order obsolete.
36. File cabinets and human minds are information-storage systems. We could
model computerized information-storage on the mind instead of the file cabinet if we
wanted to.
37. Elements stored in a mind do not have names and are not organized into
folders; they are retrieved not by name or folder but by contents. (Hear a voice, think of
a face: you've retrieved a memory that contains the voice as one component.) You
can see everything in your memory from the standpoint of past, present and future.
Using a file cabinet, you classify information when you put it in; minds classify
information when it is taken out. (Yesterday afternoon at four you stood with
Natasha on Fifth Avenue in the rain - as you might recall when you are thinking
about "Fifth Avenue," "rain," "Natasha" or many other things. But you attached no
such labels to the memory when you acquired it. The classification happened
retrospectively.)
38. A "lifestream" organizes information not as a file cabinet does but roughly as a

mind does.
39. A lifestream is a sequence of all kinds of documents - all the electronic
documents, digital photos, applications, Web bookmarks, rolodex cards, email
messages and every other digital information chunk in your life - arranged from
oldest to youngest, constantly growing as new documents arrive, easy to browse
and search, with a past, present and future, appearing on your screen as a receding
parade of index cards. Documents have no names and there are no directories; you
retrieve elements by content: "Fifth Avenue" yields a sub-stream of every document
that mentions Fifth Avenue.
40. A stream flows because time flows, and the stream is a concrete representation
of time. The "now" line divides past from future. If you have a meeting at 10AM
tomorrow, you put a reminder document in the future of your stream, at 10AM
tomorrow. It flows steadily towards now. When now equals 10AM tomorrow, the
reminder leaps over the now line and flows into the past. When you look at the
future of your stream you see your plans and appointments, flowing steadily out of
the future into the present, then the past.
41. You manage a lifestream using two basic controls, put and focus, which
correspond roughly to acquiring a new memory and remembering an old one.
42. To send email, you put a document on someone else's stream. To add a note to
your calendar, you put a document in the future of your own stream. To continue
work on an old document, put a copy at the head of your stream. Sending email,
updating the calendar, opening a document are three instances of the same
operation (put a document on a stream).
43. A substream (for example the "Fifth Avenue" substream) is like a conventional
directory - except that it builds itself, automatically; it traps new documents as
they arrive; one document can be in many substreams; and a substream has the
same structure as the main stream - a past, present and future; steady flow.
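Points 38-43 are specific enough to pseudocode. A bare-bones Python sketch under those assumptions - one time-ordered sequence, a single put operation, substreams as content queries that cover the future as well as the past (our illustration, not the commercial Lifestreams product):

import time
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    when: float      # may lie in the future: a reminder, a plan

class Lifestream:
    def __init__(self):
        self.docs = []   # no names, no folders: just time order

    def put(self, text, when=None):
        # The one operation behind mail, calendar and saving alike:
        # add a document at a point in time (default: now).
        self.docs.append(Doc(text, time.time() if when is None else when))
        self.docs.sort(key=lambda d: d.when)

    def substream(self, query):
        # A self-building "directory": every document whose content
        # matches, past and future alike.
        return [d for d in self.docs if query in d.text]

    def future(self, now=None):
        now = time.time() if now is None else now
        return [d for d in self.docs if d.when > now]

s = Lifestream()
s.put("walked with Natasha on Fifth Avenue in the rain")
s.put("meeting at 10AM", when=time.time() + 86400)   # tomorrow
print([d.text for d in s.substream("Fifth Avenue")])
print([d.text for d in s.future()])   # the reminder, still upstream of now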
In The Age Of Tangible Time
44. The point of lifestreams isn't to shift from one software structure to another but
to shift the whole premise of computerized information: to stop building glorified file
cabinets and start building (simplified, abstract) artificial minds; and to store our
electronic lives inside.
45. A lifestream can replace the desktop and subsume the functions of the file
system, email system and calendar system. You can store a movie, TV station,
virtual museum, electronic store, course of instruction at any level, electronic
auction or an institution's past, present and future (its archives, its current news
and its future plans) in a lifestream. Many websites will be organized as lifestreams.
46. The lifestream (or some other system with the same properties) will become the
most important information-organizing structure in computing because even a
rough imitation of the human mind is vastly more powerful than the most
sophisticated file cabinet ever conceived.
47. Lifestreams (in preliminary form) are a successful commercial product today, but
my predictions have nothing to do with this product. Ultimately the product may
succeed or fail. The idea will succeed.
Living Timestreams
48. Lifestreams today are conventional information structures, stored at web sites
and tuned-in using browsers. In the future they will be cyberbodies.
49. Today's operating systems connect users to computers. In the future we will
deal directly with information, in the form of cyberbodies. Operating systems will
connect cyberbodies to computers; they will allow cyberbodies to dock on computers.
Users won't deal with operating systems any more, and won't care about them. Your
computer's operating system will make as much difference to you as the voltage
level of a bit in memory.
50. A lifestream is a landscape you can navigate or fly over at any level. Flying
towards the start of the stream is "time travel" into the past.
51. You can walk alongside a lifestream (browsing or searching) or you can jump in
and be immersed in information.
52. A well-designed store or public building allows you to size up the whole space
from outside, or as soon as you walk in - you see immediately how things are laid
out and roughly how large and deep the space is. Today's typical web site is a failure
because it is opaque. You ought to be able to see immediately (not deduce or
calculate) how the site is arranged, how big it is, how deep and how broad. It ought
to be transparent. (For an example of a "transparent" web site, see Mirror Worlds,
figure 7.6.)
53. Movies, TV shows, virtual museums and all sorts of other cultural products from
symphonies to baseball games will be stored in lifestreams. In other words: each
cultural product will be delivered to you in the form of an artificial mind. You will deal
with it not as you deal with an object but roughly as you do with a person.
Institutions Afloat In The Cybersphere
54. Your car, your school, your company and yourself are all one-track vehicles
moving forward through time, and they will each leave a stream-shaped cyberbody
(like an aircraft's contrail) behind them as they go. These vapor-trails of crystallized
experience will represent our first concrete answer to a hard question: what is a
company, a university, any sort of ongoing organization or institution, if its staff and
customers and owners can all change, its buildings be bulldozed, its site relocated -
what's left? What is it? The answer: a lifestream in cyberspace.
55. A software or service company equals the employees plus the company
lifestream. Every employee has his own view of the communal stream. The
company's web site is the publicly-accessible substream of the main company
stream. The company's lifestream is an electronic approximation of the company's
memories, its communal mind.
56. Lifestreams don't yield the "paperless office." (The "paperless office" is a bad
idea because paper is one of the most useful and valuable media ever invented.) But
lifestreams can turn office paper into a temporary medium for use, not storage.
"On paper" is a good place for information you want to use; a bad place for
information you want to store. In the stream-based office, for each newly-created or
-received paper document: scan it into the stream and throw it away. When you
need a paper document: find it in the stream; print it out; use it; if you wrote on
the paper while using it, scan it back in; throw it out.
57. Software can solve hard problems in two ways: by algorithm or by making
connections - by delivering the problem to exactly the right human problem-solver.
The second technique is just as powerful as the first, but so far we have ignored it.
The Second Coming Of The Computer
58. Lifestreams and microcosms are the two most important cyberbody types; they
relate to each other as a single musical line relates to a single chord. The stream is
a "moment in space," the microcosm a moment in time.
59. Nowadays we use a scanner to transfer a document's electronic image into a
computer. Soon, the scanner will become a Cybersphere port of entry, an all-purpose
in-box. Put any object in the in-box and the system develops an accurate
3D physical transcription, and drops the transcription into the cool dark well of
cyberspace. So the Cybersphere starts to take on just a hint of the textural richness
of real life.
We'll know the system is working when a butterfly wanders into the in-box and (a
few wingbeats later) flutters out - and in that brief interval the system has
transcribed the creature's appearance and analyzed its way of moving, and the real
butterfly leaves a shadow-butterfly behind. Some time soon afterward you'll be
examining some tedious electronic document and a cyber-butterfly will appear at
the bottom left corner of your screen (maybe a Hamearis lucina) and pause there,
briefly hiding the text (and showing its neatly-folded rusty-chocolate wings like
Victorian paisley, with orange eyespots) and moments later will have crossed the
screen and be gone.
But What Does It All Matter?
60. If you have plenty of money, the best consequence (so they say) is that you no
longer need to think about money. In the future we will have plenty of technology
and the best consequence will be that we will no longer have to think about
technology.
We will return with gratitude and relief to the topics that actually count.

EDGE IN THE NEWS

The New York Times

Critic Sees Flaws in Microsoft's Strategy


An Influential Scientist Calls Focus on Web Browsing a Mistake
By John Markoff
June 19, 2000

As Microsoft prepares to announce its Next Generation Windows Services initiative
this week, an influential computer scientist is circulating a thesis that challenges
William H. Gates's vision of the future. .......
"Microsoft has based its reputation on refusing to lead and always following, and
once again they're behind the wave here," said Mr. Gelernter, a respected Yale
University computer scientist. "More and more people are coming to understand that
the power of desktop machines is enormous and is largely wasted when you spend
your time browsing on the Web."
Mr. Gelernter's argument is spelled out in "The Second Coming -- a Manifesto," an
essay published last week in the German newspaper Frankfurter Allgemeine Zeitung,
and posted on the Edge, a technology forum on the Web (www.edge.org).
Mr. Gelernter's critique has some influential supporters, including Danny
Hillis, a computer scientist who recently left Walt Disney's Imagineering research
group to form a new company, Applied Minds; David Ditzel, a computer designer
who is the founder of Transmeta Inc., a Silicon Valley microprocessor company; and
Rodney Brooks, director of the Massachusetts Institute of Technology's Artificial
Intelligence Laboratory. "David's criticisms of our current computing environments are eloquently
stated, and I think widely shared," Mr. Brooks wrote in a recent comment posted on
the Internet.
But Microsoft's head of research, Rick Rashid, countered that Mr. Gelernter was
taking a long-term view of computing that might have little relevance for the current
software market. "It's fairly predictable that David would be saying this," said Mr.
Rashid, a Microsoft senior vice president. "This has been his mantra throughout his
career." ........
Click here for the article on "THE NEW YORK TIMES on the Web"

THE REALITY CLUB


Stewart Brand, David Ditzel, John C. Dvorak, Freeman Dyson, George Dyson,
Douglas Rushkoff, Rod Brooks, Lee Smolin, Jaron Lanier, David Farber, Danny Hillis,
Vinod Khosla, John McCarthy on "The Second Coming - A Manifesto" by David
Gelernter
Marc Hauser, Milford Wolpoff, V.S. Ramachandran, and Nicholas Humphrey on V.S.
Ramachandran's "Mirror Neurons and imitation learning as the driving force behind
"the great leap forward" in human evolution"

THE REALITY CLUB


Responses to "The Second Coming - A Manifesto" by David Gelernter
Stewart Brand: The sequence is clear. From "the user is a luser" (early programmer
joke) to "the user wins" to "the user rules" (e.g. Napster) and "the user creates" (the
Web) to, with Gelernter, "the user is the system."
David Ditzel: Gelernter is ahead of us all in peering through the fog that we call the
future of technology.
John C. Dvorak: Bill Gates will love reading this stuff. Hating it will be the Ellisons
and McNealys of the world whose goal is to de-ball the personal computer and
replace it with a thin client running eunuchs.
Freeman Dyson: I suspect that he has a one-sided view of computing. I suspect that
cyberspace will also be dominated by tools, as far into the future as we can imagine.
The topography of our future cyberspace will be determined more by new tools than
by Gelernter's vision.
George Dyson: Let us hope that Gelernter's prophecies continue to be fulfilled. The
sooner spines replace icons the better - would you rather work in a library where
the books are shelved at eye-level or left lying face-up all over the floor?
Douglas Rushkoff: ...the trick to seeing through today's interfaces - a way of
envisioning information architecture that David does effortlessly - involves
distinguishing between our modeling systems and the models they build.
Rod Brooks: David Gelernter is no doubt right on about the coming revolution, but
as with all revolutions it is hard to predict the details of how it will play out. I
suspect he is wrong on the details of cyberbodies and his lifestreams.
Lee Smolin: I have the sense that David's manifesto is a bit like the predictions I
read as a child that by the 21st century cars would have evolved wings and we
would all be flying to work. The technology of cars has improved a bit since then,
but the basic experience of driving is almost exactly the same.
Jaron Lanier: This reminds me of Marx's vision of what should happen after the
revolution. He imagined we'd be reading the classics and practicing archery!
Idealists always believe there's some more meaningful, less dreary plane of
existence that can be found in this life.
David Farber: We are at the edge of a real dramatic change in technology. For the
past decade we have evolved from the view that the network is just a way of
connecting computers together, to the current view that the network is the action,
to the view often stated (by me and others) that no one cares about the network but
only about what they can access and interact with - information and people.
Danny Hillis: David Gelernter is basically right: current generation computer
interfaces are not very good. (Since we are all among friends here, we can say it:
they suck).
Vinod Khosla: Transition strategies here will significantly impact the end state.
John McCarthy: Unfortunately, the making of computer systems and software is
dominated by the ideology of the omnipotent programmer (or web site designer)
who knows how the user (regarded as a child) should think and reduces the user's
control to pointing and clicking. This ideology has left even the most sophisticated
users in a helpless position compared to where they were 40 years ago in the late
1950s.

From: Stewart Brand


Date: June 11, 2000
It's a great screed, inspiring and generative. It is a frame of reference worth filling
with reality.
For me, Gelernter's manifesto speaks to widespread growing aggravation with the
current system and growing impatience with the burgeoning tech possibilities not
being addressed at a deep enough level. "About time!" was my gut response.
The sequence is clear. From "the user is a luser" (early programmer joke) to "the
user wins" to "the user rules" (e.g. Napster) and "the user creates" (the Web) to,
with Gelernter, "the user is the system."
The still unanswered question though is: How does this system fare over time? How
does it keep from the self-obsolescing self-erasure endemic to current computer
tech? How do the lifestream contrails keep their shape amid ferociously turbulent
winds? Those winds are not extraneous to the system; they are how the system
grows.
STEWART BRAND is founder of the Whole Earth Catalog, cofounder of The Well,
cofounder of Global Business Network, cofounder and president of The Long Now
Foundation. He is the original editor of The Whole Earth Catalog, author of The
Media Lab: Inventing The Future At MIT, How Buildings Learn, and The Clock Of The
Long Now: Time And Responsibility (MasterMinds Series).

From: David Ditzel


Date: June 11, 2000
David Gelernter's Manifesto is a humbling document to read, because it points out the
generally unrecognized but herein revealed truth that we are only at the beginning
of understanding how the evolution of the internet is going to change our lives.
Gelernter is ahead of us all in peering through the fog that we call the future of
technology.
DAVID DITZEL is CEO of Transmeta Corporation.

From: John C. Dvorak


Date: June 12, 2000
Finally, someone who knows what they're talking about and who isn't simply viewed
as an embittered cynic tells it like it is regarding the notion of remote computing
among other dumb ideas. Bill Gates will love reading this stuff. Hating it will be the
Ellisons and McNealys of the world whose goal is to de-ball the personal computer
and replace it with a thin client running eunuchs. I also like his slamming the
dubious concept of a computer "Desktop" and trashing the idea of file folders and
other computer commonplaces promoted by the charismatic Steve Jobs and copied
lockstep by Gates and company. Unfortunately all the points in the manifesto are
right but otiose. Trends and fads promoted by strength of personality - whether it be
Fascism, rap music, thong bikinis or the WIMP (windows, icons, mouse, pointer)
interface - are not easy to reverse. It's the mechanism of trend reversal that needs
study and comment. A laundry list of all that is wrong with computing today is an
exercise in futility when hero worship and sheep-like behavior are the norm. This
manifesto will amount to nothing in the end. A shame.
JOHN C. DVORAK is the host of Silicon Spin on ZDTV. He is a contributing editor of
PC Magazine, where he has been writing two columns, including the popular "Inside
Track," since 1986.

From: Freeman Dyson


Date: June 12, 2000
Thank you very much for sending the Gelernter manifesto, full of wonderful imagery
and eloquence. Here are some brief comments.
Gelernter lays out a grand vision of cyberbodies and lifestreams inhabiting the
cyberspace of the future. He brings his vision to life with images that every child can
understand, the bluebird perching on a branch, the cloud's shadow drifting across
the paved courtyard. There will be a place for humans, even for children, in his
cyberspace. In his vision of the future, we shall no longer be parking cars in a
pint-sized Manhattan parking-lot. We shall be flying free in cyberspace, leaving behind
vapor trails of experience and memory for other humans to explore.
Fifty years ago we heard about a different vision of a possible future. We heard that
the automobile would soon be obsolete, its mobility diminished by the constantly
increasing density of traffic, its destructive effect on the environment no longer
tolerable in a civilized society. We heard that the automobile would soon be replaced
by the helicopter as the preferred vehicle for personal transportation. We would
soon be living in a three dimensional world, with helipads replacing garages beside
our homes. The reasons why that vision of a roadless civilization never materialized
are obvious. Helicopters remained noisy, accident-prone and expensive, roads and
automobiles turned out to be unexpectedly resilient. The vision was beautiful, but
the tools to make it real were defective.
Gelernter's vision is also beautiful, and his scornful sweeping of existing computers
and operating systems into the dustbin of history is persuasive. The chief question
that his vision raises is whether we shall have the tools to make it real. Gelernter
disparages tools. He says, "The real topic in astronomy is the cosmos, not
telescopes. The real topic in computing is the cybersphere and the cyberstructures
in it, not the computers ..." I know more about astronomy than about computing. I
can certify that he has a one-sided view of astronomy. Modern astronomy is
dominated by tools. It is about telescopes and spacecraft as much as it is about the
cosmos that these tools explore. Every time we introduce a new tool, we see a new
cosmos. And I suspect that he has a one-sided view of computing. I suspect that
cyberspace will also be dominated by tools, as far into the future as we can imagine.
The topography of our future cyberspace will be determined more by new tools than
by Gelernter's vision. Still, he has pointed the way for the next generation of tool
builders to follow. We must hope that they will be more successful than the builders
of helicopters fifty years ago. If the tool-builders can build tools to match his vision,
then our children and grandchildren might see the Second Coming and live in the
world of Gelernter's dreams.

FREEMAN DYSON is professor of physics at the Institute for Advanced Study, in
Princeton. His professional interests are in mathematics and astronomy. Among his
many books are Disturbing The Universe, Infinite In All Directions, Origins Of Life,
From Eros To Gaia, Imagined Worlds, and The Sun, The Genome, And The Internet.

From: George Dyson


Date: June 12, 2000
Let us hope that Gelernter's prophecies continue to be fulfilled. The sooner spines
replace icons the better - would you rather work in a library where the books are
shelved at eye-level or left lying face-up all over the floor?
For fifty years, digital computing has rested upon two invariant foundations: the
program (as given by Turing) and the address matrix (as given by von Neumann
and Bigelow). Who could have imagined, 50 years ago, that we would load millions
of lines of 'machine-building' code just to check our mail, or that an international
political organization would be charged with supervising the orderly assignment of
unambiguous coordinates to every bit of memory connected to the net?
Only a third miracle - dirt-cheap, near-perfect microprocessing - allows a system
so inherently intolerant of error and ambiguity to work as well as it does today.
Gelernter is right: a revolution is overdue. And underway.
In molecular biology, addressing of data and execution of order codes are
accomplished by reference to local templates, not by reference to some absolute or
hierarchical system of numerical address. The instructions say "do x with the next
copy of y that comes along" without specifying which copy, or where. This ability -
to take general, organized advantage of local, haphazard processes - is exactly the
ability that (so far) has distinguished information processing in living organisms
from information processing in digital computers. This is not to suggest an
overthrow of the address matrix - which is with us to stay. But software that takes
advantage of template-based addressing will rapidly gain the upper hand.
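The contrast can be made concrete in a few lines (our illustration; compare the tuple-space sketch earlier): an absolute address names one exact cell and fails if it is wrong, while a template names a shape and binds to whichever matching copy happens along:

# Absolute addressing: the instruction names an exact location.
memory = {0x1F3A: "y", 0x1F3B: "z"}
value = memory[0x1F3A]        # breaks if the coordinate is off by one

# Template addressing: "do x with the next copy of y that comes along."
pool = [("z", 7), ("y", 3), ("y", 9)]

def next_copy(kind, pool):
    # Bind to the first tuple whose first field matches; which copy,
    # and where it came from, is deliberately left unspecified.
    for item in list(pool):
        if item[0] == kind:
            pool.remove(item)
            return item
    return None

print(next_copy("y", pool))   # ('y', 3) - some copy of y, not a fixed cell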
The other foundation, the program, is based on the fact that digital computers are
able to solve most - but not all - problems that can be stated in finite,
unambiguous terms. They may, however, take a very long time to produce an
answer (in which case you build faster computers) or it may take a very long time to
ask the question (in which case you hire more programmers). For fifty years,
computers have been getting better and better at providing answers - but only to
questions that programmers are able to ask.
I am not talking about non-computable problems. Despite the perennial attentions
of philosophers, in the day-to-day world such problems remain scarce. There is,
however, a third sector to the computational universe: the realm of questions whose
answers are, in principle, computable, but that, in practice, we are unable to ask in
unambiguous language that computers can understand. This is where brains beat
computers. In the real world, most of the time, finding an answer is easier than
defining the question. It's easier to draw something that looks like a cat than to
describe what, exactly, makes something look like a cat. A child scribbles
indiscriminately, and eventually something appears that happens to resemble a cat.
A solution finds the problem, not the other way around. The world starts making
sense, and the meaningless scribbles are left behind. This is the power of that Mirror
World we now perceive as the Internet and the World Wide Web.
"An argument in favor of building a machine with initial randomness is that, if it is
large enough, it will contain every network that will ever be required," advised
cryptanalyst Irving J. Good, speaking at IBM in 1958. Even a relatively simple
network contains solutions, waiting to be discovered, to problems that need not be
explicitly defined. The network can and will answer questions that all the
programmers in the world would never have time to ask.
GEORGE DYSON is a leading authority in the field of Russian Aleut kayaks - the
subject of his book Baidarka, numerous articles, and a segment of the PBS
television show Scientific American Frontiers. His early life and work was portrayed
in 1978 by Kenneth Brower in his classic dual biography, The Starship And The
Canoe. Now ranging more widely as a historian of technology, Dyson's most recent
book is Darwin Among The Machines.

From: Douglas Rushkoff


Date: June 12, 2000
David Gelernter's "The Second Coming" reminds me just how arbitrarily so many of
our decisions about how to do computing and networking have been reached.
Techniques for sharing super-computing resources or keeping lines of code ready for
a compiler have, through their very legacies, become the architectural basis for
humanity's shared information space.
It seems to me that the trick to seeing through today's interfaces - a way of
envisioning information architecture that David does effortlessly - involves
distinguishing between our modeling systems and the models they build. While
memory, information, hardware, and software might need to conform to certain
realities, the very opacity of our current operating systems (both technological and
social) implies an immutability that just isn't real. The only obstacles to this
unencumbered perception of memory, information, storage, and interaction are our
own prejudices, formed either randomly or by long-obsolete priorities, and kept in
place by market forces.
DOUGLAS RUSHKOFF, a Professor of Media Culture at New York University's
Interactive Telecommunications Program, is an author, lecturer, and social theorist.
His books include Free Rides, Cyberia: Life In The Trenches Of Hyperspace, The
Genx Reader (Editor), Media Virus! Hidden Agendas In Popular Culture, Ecstasy Club
(A Novel), Playing The Future, and Coercion: Why We Listen To What "They" Say.

From: Rodney Brooks


Date: June 13, 2000
David Gelernter is no doubt right on about the coming revolution, but as with all
revolutions it is hard to predict the details of how it will play out. I suspect he is
wrong on the details of cyberbodies and his lifestreams. The first because as framed
it relies still on a physical icon to identify the body, and the second because it is just
one metaphor that many will find inconvenient. In the following paragraphs I'll
outline my own versions of what the revolution will bring in these two departments,
and no doubt my visions will be as wrong as David's, or more so.
But first the actuality of the revolution. David's criticisms of our current computing
environments are eloquently stated, and I think widely shared. A number of projects
were started about a year ago, originally through a DARPA-sponsored "Computing
Expeditions" program. At CMU the expedition is called "Aura", at Berkeley it is
"Endeavour" (named for Cook's ship, and hence the spelling), at the University of
Washington/Xerox Parc it is called "Portolano/Workscapes". At MIT, Michael
Dertouzos, Anant Agarwal and I are leading "Project Oxygen" dedicated to pervasive
human-centered computing. The common theme across all these projects is that
human time and attention is the limiting factor in the future, not computation speed,
bandwidth, or storage.
In the past the human has been forced to climb into the computer's world. First with
binary, and holes punched in cards, and then later by physically approaching that
"square foot or two of glowing colors on a glass panel", and being drawn into its
virtual desktop with metaphors bogged down by copies of physical constraints in
real offices. In MIT's Project Oxygen, a joint project of the Laboratory for Computer
Science and the Artificial Intelligence Lab, we are trying to drag the computer out
into the world of people. Computers are fast enough now to see and hear - and
these are the principal modalities which we use to interact with other people. We are
making our machines interact with people through these same modalities, using the
perceptual capabilities of people rather than forcing them to rely on their cognitive
abilities just to handle the interface. Cognitive capabilities should be reserved for the
real things that people want to do.
Now for cyberbodies and lifestreams. By making computation people centric it
should not matter whether I am in your office or mine, whether I pick up your PDA
or mine, whether I pick up your cell phone or mine. Wherever I am the system
should adapt to my identity, whether I am carrying a "calling card" or not. It should
adapt to me, not to yet another technological decoration that I need to carry
around. And it should be automatic and secure as it does this. Just as people can tell
my identity through vision and sound so too can our machines. Furthermore, as
computation is cheap, much cheaper these days than special purpose circuitry (and
wherever that is not true yet, it will soon be), there is no need for artifacts to have
any particular identity. According to my needs at that instant, the machine in my
hand should be able to morph from being a PDA to a cell phone, to an MP3/Napster
player, just by changing the digital signal processing it is doing. Physics requires a
little bit in the way of an aerial, but beyond that demodulation, etc., can be in
software. And then the systems should handle bandwidth restrictions behind my
back, performing vertical hand-off between protocols as invisibly as today's cell
phones perform horizontal hand-off between cells.
Lifestreams are one sort of metaphor. We will not be subject to the tyranny of a
single metaphor as we are subject today to the desktop metaphor which Gelernter
so masterfully scorns. For a lot of my everyday work I will prefer a metaphor of a
personal assistant. I tell it something, and it takes care of the details, watching over
me and only interceding when it sees that I need help, pulling in all the necessary
information from wherever it is located, perhaps cached ahead of time in
anticipation of my needs. After working with me for many years my human personal
assistant knows so many details of my life and interactions that I can entrust her to
handle many of my interactions with the world, without me ever providing any
supervision. I will want a similar relationship with my computation. Others might
prefer a geographical metaphor, zooming around through a virtual world, while a
few might like the lifestreams metaphor. Once a few of these metaphors get
invented and tried out, there will be a deluge of new metaphors as the young
hackers attack the interface problem with a vengeance.

RODNEY A. BROOKS is Director of the MIT Artificial Intelligence Laboratory, and
Fujitsu Professor of Computer Science. He is also Chairman and Chief Technical
Officer of IS Robotics, an 85-person robotics company. Dr. Brooks also appeared as
one of the four principals in the Errol Morris movie "Fast, Cheap, and Out of Control"
(named after one of his papers in the Journal of the British Interplanetary Society)
in 1997 (one of Roger Ebert's 10 best films of the year).

From: Lee Smolin


Date: June 13, 2000
David Gelernter has a wonderful imagination and I am a bit afraid to contradict him,
as he has obviously spent much more time thinking about the future of computing
than I have. I am intrigued by many of the things he proposes. But let me say a
word in defense of the present Macintosh system. I do suspect that some computer
scientists have forgotten just how revolutionary and useful the Mac operating
system is, and may be underestimating the longevity of this particular technology.
It is true that the Macintosh operating system is based on the old-fashioned
metaphor of a desktop and filing cabinet. But I find that metaphor very useful. I do
think of my computer as a very efficient and useful filing cabinet. I like the fact that
the files have names and that I can search for them efficiently in several different
ways. I like the hierarchical structure of directories. I like the fact that email is
different from ordinary files, and I am happy that it only takes a few keystrokes to
turn an email into a file if I need it to be one, or vice versa.
I also like the limited area of the desktop on my powerbook screen. At work I have a
Silicon Graphics machine which works a bit more like David wants: one can have many
different desktops for different purposes and each can be much bigger than the
screen, even though that is many times the size of the screen on my powerbook.
But I find that I don't use any of these added features. It is too hard to remember
how to use them, and I find that when I try to I often lose windows and icons
which are off the screen. What is good about the desktop is that it is so limited. I
can have piles of windows open at once, but I know where they all are. When there
are too many I know I have to close some, which forces me to do a bit of cleaning
up. It is like having to clean up one's desk when it overflows. Only unlike my real
desk, which I can simply ignore, I do have to deal with my desktop and clean it up
from time to time to keep working. I find this very useful as it enforces a minimal
level of organization in my work habits.
What David is describing is a computer which would work more like my own mind.
But I am not sure I need a computer of this kind. Perhaps I do, I've never had one.
But I do already have quite a good associative memory. My guess is that its
limitations are built in, as there is an inevitable compromise between the vividness
of memory and associations and alertness to the present. I would not want going to
my computer to work to be like opening a box of old letters and photographs or
facing the task of throwing away old magazines that I never got to read. With a
computer like this I might never get anything done. More than anything what I like
about my computer is that it does not offer me any information that I don't ask for.
What has gotten so distasteful about going on line is the imposition of unwanted
information. The web was a lot more useful before pages began to be crowded with
advertising and unwanted information. The sites I use mostly are the ones that offer
the least possibilities for diversion from what I am seeking. If randomness and
unpredictability were built into the experience of computing, it would cease to be
a useful tool for me. Not enough has been said about the way that one site
can change the working habits of a whole profession, by changing the way we
communicate with each other. This is true of the xxx.lanl.gov site, which is now the

universal tool for publication in physics and math. It is tightly and rigidly structured,
and that is what makes it so useful. It is an extremely good filing cabinet, so good
that it replaces many filing cabinets in thousands of offices all over the world.
I also don't like the metaphor of organizing my interface with the computer in terms
of the flow of real time. Another very good aspect of my computer is that it provides
the illusion that time can be frozen. I can work on several projects at once, and
each one is exactly where I left it when I go back to it. In the context of a very busy
life, full of travel and unexpected demands and developments, my computer
provides an oasis in which time advances in each window only when I pay attention
to it.
So I don't need a computer to enhance my imagination or associative memory. I
need a computer that counteracts the effects of my own too active imagination and
too busy schedule. Because of this I know that a computer that works the way my
powerbook does is something I will always need. And what makes my powerbook so
useful is the fact that it works so differently than I do. The fact that all the files have
names and locations in a hierarchical system is part of what makes it so useful.
When I want to find a paper I wrote three years ago on quantum geometry I want
to be able to pull up that file right away, not every file I wrote in the last five years
about some aspect of quantum geometry. Every once in a while I lose something
and it might be good to have a search machine that worked associatively. But not
very often.
I do agree with a lot of what David says. I can imagine lots of improvements on the
present Mac operating system. Some of the things he suggests would be very
useful. And of course the idea of a kind of cyber-agent who represents me in
cyberspace is intriguing and perhaps useful. But I have the sense that David's
manifesto is a bit like the predictions I read as a child that by the 21st century cars
would have evolved wings and we would all be flying to work. The technology of
cars has improved a bit since then, but the basic experience of driving is almost
exactly the same. Personally I don't cherish that experience so I prefer living in
places where one can get almost everywhere by public transportation. Here in
London at the beginning of the 21st century the only people who helicopter to work
regularly are a few wealthy businessmen and a few members of the royal family.
LEE SMOLIN is a theoretical physicist; professor of physics and member of the
Center for Gravitational Physics and Geometry at Pennsylvania State University;
author of The Life of the Cosmos.

From: Jaron Lanier


Date: June 13, 2000
I'm so delighted that David is still fighting the good fight, an idealist after all these
years. Greed and even satisfied wealth have proven to be agents of distraction to all
too many cyberdreamers. It's becoming ever more rare to find a young student with
even half of David's quotient of fire in his/her soul about the potential for beauty
and meaning in digital tools.
So, while I will offer some criticisms below, I hope they will be read as friendly and
supportive.
David falls into a common trap that has snagged many a visionary over the years.
He thinks about ideal Platonic computers instead of real computers. A billion Platonic
computers support a seamless virtual space in which programs fly about
unconcerned with which real computer might be visited at a given moment. A billion
real computers, in contrast, require ten million human beings to run helpdesks,
many thousands more to fight lawsuits over software compatibility, and a few
hundred more to track malicious viruses that invade the automated virus tracking
software that never quite worked.


Real computers, unlike ideal computers, are the first machines that require an
infinite rather than a finite amount of human labor for their maintenance. Real
computers are less likely to allow us to forget them than any other gadget in the
history of invention.
Furthermore, in order for a Platonic computer to appear, human good will and good
taste will have to precede it. There will have to be no Bill Gates who forces
technological sensibility into a retrograde motion in order to gain power.
In order for a Platonic computer to appear, humans will have to understand how to
write large programs that interface with the real world in such a way that they are
at once modifiable, secure, immune to becoming the bearers of future legacy
headaches, and amenable to decent user interface design. We simply don't know
how to write such programs yet. I expect us to learn to do it someday, in the same
way I expect us to be able to build anti-gravity devices someday. I am idealistic, but
not for progress in any relevant timeframe.
Moore's Law simply doesn't apply to software as it does to hardware. Software uses
every opportunity to get worse instead of better. More memory means more bloat.
More users means more incentives not to change, which means more legacy
highwire Band-Aids. Software is like culture, starting out fresh and becoming
decadent.
Having said all that, I love David's vision. Reading it inspired me to dig up a bit of
my old ranting about what virtual reality software should look like. As it happens, I
was hoping for something very much like Lifestreams back in the mid-80s.*
(*See http://www.advanced.org/jaron/vrint.html)
As I re-read this old material now, about fifteen years later, it seems a little naive.
Surely I didn't think I'd play back virtualized memories as if they were on tape, fast-forwarding and reversing. That works for a single movie, but is no more possible for
a lifetime than naming all those 10,000 cows. How would I break memories into
atomic units so that they could be summarized or re-ordered? Would I just see a
little bit of each room I entered? Maybe rooms aren't the right divider markers for
memories. I'd have to impose some ontology onto my memories in order to be able
to reduce them enough to search through them and manipulate them. I can't deal
with my memories in an unreduced form, because I don't have the time. (This is the
temporal version of the old Borges story about the map as big as the country it
represents.) The fact that my memories must be automatically reduced in order to
be usable brings up another problem area in David's vision.
Although it isn't immediately apparent, there's an implicit reliance on Artificial
Intelligence in David's manifesto. Somehow the cybertraces that one leaves as one
flits about the cyberuniverse, carefree like a butterfly, must at some point be parsed
(according to that magic ontology) to be usable in the future. Either there's a
sweatshop of third world workers going over the life experiences of every wired
citizen of the industrialized world, or there are computer algorithms doing the job.
Maybe by now my colleagues on this list are sick of my unyielding stance on AI, but
I must repeat once again that Artificial Intelligence just stinks. It's a phony effect.
You can't get something for nothing; the computer can't add wisdom to the mix. Or
if you believe it can, I feel you've reduced yourself in a deep way, morally and
esthetically. Think of the Turing Test: How can the judge know if the computer has
gotten smart or if the person has gotten stupid? How can you know if those
omniscient credit rating algorithms are brilliant, or if you're being an idiot by
borrowing money when you don't need to in order to feed the algorithm with data?
Once again, I feel a tension between the ideal and the real. I am sold on the Lifestreams vision, on David's whole package, but I think the experience of using it
will be extremely labor intensive, for me and for everybody.
And utterly worth all the trouble.
I must reject the final paragraph of the manifesto, which imagines an aspect of life
more meaningful than technology, which we will be free to pursue when we can
forget about technology. This reminds me of Marx's vision of what should happen after
the revolution. He imagined we'd be reading the classics and practicing archery!
Idealists always believe there's some more meaningful, less dreary plane of
existence that can be found in this life. All we have to do is fix this hunking mess in
front of us and we'll get there.
A lovely belief to hold!
JARON LANIER, a computer scientist and musician, is a pioneer of virtual reality, and founder and former CEO of VPL. He is currently the lead scientist for the National Tele-Immersion Initiative.

From: David Farber


Date: June 14, 2000
Gelernter's manifesto is certainly well written. It is flowery and eloquently stated. However (why is there always a "however"?), it introduces new terms but not many new ideas that have not often been expressed.
We are at the edge of a dramatic change in technology. Over the past decade we have evolved from the view that the network is just a way of connecting computers together, to the view that the network is where the action is, to the view often stated (by me and others) that no one cares about the network itself but only about the information and people they can access and interact with.
We are about to replace our old slow electro-optical communications systems with
all optical end to end systems. This technology offers an enormous increase in bits
per second. One strand of fiber can carry more bits per second than the entire current national backbone. This will cause a dramatic change in everything we have
now. We will have to re-think our network protocols, the architecture of our
computers and just what we mean by a computer and software. Old ideas will soon
go the way of the big mainframe operating systems and computers.
Back to the manifesto. It blends well into this rethinking process that the new
technology will force. It would be unfortunate if the result of this re-conceptualization ended up with the same old appearance and world model for users.
The manifesto is a major step in making sure that does not happen. Let's just
realize that the ideas are not new - they reflect the ideas of many people over many
years. Now we need an industrial structure that allows these ideas to be developed
and marketed!
DAVID FARBER, considered by many to be the grandfather of the Internet, is Chief
Technologist, Federal Communications Commission.

From: Danny Hillis


Date: June 16, 2000
David Gelernter is basically right: current generation computer interfaces are not
very good. (Since we are all among friends here, we can say it: they suck.) The ubiquitous windows desktop is a classic example of "early lock-in", like the Qwerty keyboard and the strange conventions of English spelling. These are both generally acknowledged as unfortunate accidents of history. They are non-optimal, but not quite bad enough to be worth changing. In fact, the standard computer interface incorporates both of these awful interfaces, yet interestingly, Gelernter does not suggest changing them.
Are we at the point where the desktop computer interface will be thrown out and replaced with something better? Is the computer desktop like the Roman alphabet, which we have learned to live with in spite of its quirks, or is it like the Roman system of numerals, which we have pretty much abandoned? As much as I like the idea of starting with a clean slate, I think it is more like the alphabet than the numerals, and it is more likely that the desktop interface will be improved than abandoned. Most of the specific improvements that Gelernter suggests, like content addressing, time-linking and multiple names, can be and are being incorporated into standard interfaces. It won't be elegant, but it will work.
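Hillis's three examples are all, at bottom, indexing schemes, and they are easy to see in miniature. The following Python sketch is purely illustrative (the class and method names are invented; nothing here is drawn from Gelernter's Lifestreams or any shipping interface): a document's primary key is a hash of its content, every document is threaded onto a time-ordered stream, and any number of human-readable names can point at the same content.

    import hashlib
    import time

    class ContentStore:
        """Toy illustration of content addressing, time-linking,
        and multiple names for the same document."""

        def __init__(self):
            self.blobs = {}    # content hash -> document text
            self.names = {}    # human-readable alias -> content hash
            self.stream = []   # (timestamp, content hash), in arrival order

        def put(self, text, *aliases):
            # Content addressing: the primary key is derived from the
            # content itself, so identical text always gets the same key.
            key = hashlib.sha256(text.encode()).hexdigest()
            self.blobs[key] = text
            self.stream.append((time.time(), key))  # time-linking
            for alias in aliases:                    # multiple names
                self.names[alias] = key
            return key

        def get(self, name_or_key):
            return self.blobs[self.names.get(name_or_key, name_or_key)]

        def between(self, t0, t1):
            # Retrieve by *when* rather than *where*: everything
            # filed in a given window of time.
            return [self.blobs[k] for (t, k) in self.stream if t0 <= t <= t1]

The point of the content key is that a document's identity follows its text rather than its location in a folder tree; the aliases and the time stream are just extra indexes over the same store.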
So does this mean that we are doomed to a millennium of Windows 2xxx? I doubt it.
As Scott McNealy is fond of pointing out, current PC operating systems are unwieldy
"hair balls" of accumulated history. Eventually, someone will start from scratch and
build something better. But I would be surprised if they start by throwing out the
part that most users are most comfortable with, which is the metaphor of physical document handling. The replacement, when it emerges, will win by doing a better
job of the same thing.
Yet, there is also a second type of competition, which is not so much a replacement
as an addition. Computers are useful for more than handling documents, and other
interfaces will be developed for these other functions. These are interfaces more
likely to nurture the emergence of radical new ideas. If David Gelernter really wants to invent a new interface (and he would probably be good at it) he should forget about looking for a better way to handle documents, and start thinking about a computer that handles ideas.
W. DANIEL HILLIS, former vice president of research and development at The Walt
Disney Company, is the co-founder of a startup, Applied Minds. He is the author of the book The Pattern On The Stone: The Simple Ideas That Make Computers Work.

From: Vinod Khosla


Date: June 18, 2000
A brief scan leaves the impression that while "the second coming" is inevitable, as with most technologies the path to getting there often changes the end we arrive at. Transition strategies here will significantly shape the end state.
VINOD KHOSLA is a partner in the venture capital firm Kleiner Perkins Caufield &
Byers. He was a co-founder of Daisy Systems and founding Chief Executive Officer
of Sun Microsystems.

From: John McCarthy


Date: June 18, 2000
Comments on the Gelernter Manifesto
i. I found a lot wrong with the manifesto, so I'll begin with something I found usable
in it. Gelernter grumbles in item 31 that since email messages aren't files they don't
have names and can't stand on their own. I also find it a problem, and it occurred to
me how to mitigate the problem in my own mail reader which is within my word
processor.

Suppose I'm reading a message that I consider significant. Typing a single command
inserts a reference to the appropriate page in the message file at the end of a
special file of messages, puts in the time, and puts me where I can add an
identifying comment. The entry for the email with the manifesto is "Sat Jun 17
12:48:28 2000 /u/jmc/RMAIL.S00==1906 Gelernter Manifesto", giving the time, the
location of the message in the mail file and the name I gave the message.
If I later click on that line, I'll be reading the message again.
The purpose of messages having names of some sort is so that the receiver can
retrieve a message later. I doubt that such a name can be automatically generated
from the message itself, because the subject line, etc. are in the mental space of the
sender, not the receiver. The receiver has to somehow give the message a name if
he wants to be able to subsequently retrieve it in one step. In this case, I chose
"Gelernter Manifesto".
It took 12 minutes to write and debug the message naming facility in the Xemacs
editor. The MS-Word users I consulted told me that it would be very difficult to script
MS-Word and Windows email systems to do it.
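McCarthy's facility lives in Xemacs Lisp; as a rough illustration of the same idea outside Emacs, here is a sketch in Python (the bookmark file path, the entry format, and the function names are invented for illustration, modeled on his example entry):

    import time

    BOOKMARKS = "/u/jmc/MESSAGES"   # hypothetical "special file of messages"

    def name_message(mail_file, offset, label):
        # Append one bookmark line: timestamp, location in the mail file,
        # and the name the *receiver* chose, e.g.
        # "Sat Jun 17 12:48:28 2000 /u/jmc/RMAIL.S00==1906 Gelernter Manifesto"
        stamp = time.strftime("%a %b %d %H:%M:%S %Y")
        with open(BOOKMARKS, "a") as f:
            f.write(f"{stamp} {mail_file}=={offset} {label}\n")

    def open_bookmark(line):
        # Parse a bookmark line back into (mail_file, offset) so a front
        # end can jump straight to the saved message.
        fields = line.split()
        mail_file, offset = fields[5].split("==")
        return mail_file, int(offset)

The essential design point is McCarthy's: the name comes from the receiver at the moment of reading, since no automatic process can know what a message will mean to the person filing it.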
ii. We all find ourselves repeating essentially the same tasks in using computers.
Here's a slogan.
Anything a user can do himself, he should be able to make the computer do for him.
Fully realizing this slogan would be a big step, but even a little helps. It's called
letting the user "customize" his environment. Point i above is a small example.
Unfortunately, the making of computer systems and software is dominated by the
ideology of the omnipotent programmer (or web site designer) who knows how the
user (regarded as a child) should think and reduces the user's control to pointing
and clicking. This ideology has left even the most sophisticated users in a helpless
position compared to where they were 40 years ago in the late 1950s.
Scripting languages were a start in the direction of giving the user more power, but
the present ones aren't much good, and not even programmers use them much to
make their own lives simpler. Scripting is particularly awkward for point and click
use. Xemacs customization is reasonably convenient, but it isn't contiguous with
Xemacs Lisp, a really good programming language.
Linux is a step in the right direction of giving the user control in that the source of
the operating system is available to users, but I doubt that many users change Linux for purely personal convenience.
Back to Gelernter
iii. Most of the Manifesto's metaphors, e.g. "beer from burst barrels" and "scooped
out hole in the beach", aren't informative.
iv. In item 4, Gelernter offers
The Orwell law of the future: any new technology that CAN be tried WILL be. Like
Adam Smith's invisible hand (leading capitalist economies toward ever increasing
wealth), Orwell's Law is an empirical fact of life.
It isn't true, and I don't believe Orwell said it. In the preface to "1984", Orwell wrote
that "1984" is a cautionary tale that he didn't expect to happen. In particular,
"1984" has the tv that permitted Big Brother's minions to spy on the viewer. I don't
think Orwell expect that to be tried, and it hasn't been.
Indeed the reverse is true. Most possible new technologies are never tried.

v. Gelernter, like many other commentators, is glib about the system software and
its documentation being bad. Don Norman beat that drum, and Apple hired him to
make things better. He and they didn't have much success. A more careful analysis
of what causes difficulty and how to fix it is needed.
vi. The problem with file systems and any other tree structures is that tree
structures aren't memorable. Someone else's tree structure, e.g. a telephone
keypad tree, is often helpful the first time you use it, but it is a pain to go through
the tree again and again to reach a particular leaf.
vii. I couldn't figure out what Cybersphere was supposed to mean except that it's
grand, and I see that the other commentators didn't either. Computers haven't
changed people's lives to the extent that telephones, radio, automobiles and air
travel did early in the previous century. Paul Krugman is eloquent on this point in
the NY Times for 2000 June 18. Human level artificial intelligence would
revolutionize human life, but fewer people in AI are working in that direction than in
the 1970s. Erik Mueller documents one aspect of this neglect in his 1999 article
http://www.media.mit.edu/~mueller/papers/storyund.html.
viii. I think the idea of doing an Amazon search for a book on your own computer is
a bad one, because the computations are trivial, whereas the file accesses to the
Amazon database are substantial. To do it on your own computer would require
downloading the whole Amazon catalog before you started your search.
ix. Re items 21 through 26, I don't think changing "desktop" to "information landscape" would have made much difference. The problem of what you can do with a small screen will remain as long as we have small screens. Two-foot by three-foot flat screens with 200-pixel-per-inch resolution will change computer use much more than another factor of 100 in processor speed. We also need the bathtub screen, the beach screen and the bed screen.
x. item 32. Directories reaching out for files is vague and suggests more AI than is
currently available.
xi. There's something in "streams of time", but it's vague. One thing that is feasible
is for an operating system to make a journal including all the user's key strokes and
mouse clicks and identifiable more substantial operations. The journal should be
available for the user to inspect, replay bits of, and to offer for expert inspection
when something has gone wrong.
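A minimal sketch of such a journal, in Python, might look like this (the class, the file format, and the method names are all hypothetical; a real version would live in the operating system and capture events at a much lower level):

    import json
    import time

    class ActionJournal:
        """Append-only record of a user's actions: inspectable,
        partially replayable, and shareable when something goes wrong."""

        def __init__(self, path):
            self.path = path

        def record(self, kind, detail):
            # kind is "key", "click", or a named higher-level operation
            event = {"t": time.time(), "kind": kind, "detail": detail}
            with open(self.path, "a") as f:
                f.write(json.dumps(event) + "\n")

        def events(self, since=0.0):
            # Let the user inspect (or hand an expert) everything
            # recorded after a given timestamp.
            with open(self.path) as f:
                return [e for e in map(json.loads, f) if e["t"] >= since]

        def replay(self, since, apply):
            # Re-run a slice of the journal through a caller-supplied
            # handler, e.g. to reproduce the steps that led to a failure.
            for e in self.events(since):
                apply(e["kind"], e["detail"])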
xii. I don't understand the objection to names; they were invented long before
computers. In item 37, Natasha and Fifth Avenue are names.
xiii. item 41. "To send email, you put a document on someone else's stream." That
suggests that the recipient would read it right away or at least at a time determined
by the sender. Present email sits till you get around to it, and that's better.
xiv. Paper will be needed until screens are better. I use paper just as Gelernter
suggests. Print the document for reading and then throw it away. I'll do that even at
the cost of losing the pretty red ink I've put on my printout of the Manifesto.
JOHN McCARTHY is Professor of Computer Science at Stanford University. A pioneer
in artificial intelligence, McCarthy invented LISP, the preeminent AI programming
language, and first proposed general-purpose time sharing of computers.

Responses to V.S. Ramachandran's "Mirror Neurons and imitation learning as the driving force behind "the great leap forward" in human evolution": Marc Hauser, Milford Wolpoff, V.S. Ramachandran, and Nicholas Humphrey

From: Marc D. Hauser


Date: May 31, 2000
I would like to respond to a few of the issues raised by Rama's essay on mirror
neurons. I don't disagree at all about the importance of mirror neurons, but I do
disagree with some of the points that Rama makes about evolution, primates,
language, and the interface between brain and behavior. I pick on these points as
they appear.
Point-1: "1) The hominid brain reached almost its present
intellectual capacity about 250,000 years ago."
MDH: What is the basis for this date? What is meant by "intellectual capacity"? This
sounds like the tired old argument from anthropology and other disciplines that the
emergence of sophisticated tools, controlled fire, and so on represents the kind of
fossilized evidence of intelligence that is most meaningful. I think a more carefully
reasoned argument than this is necessary.
Point-2: "3) Why the sudden explosion (often called the "great leap") in
technological sophistication, widespread cave art, clothes, stereotyped dwellings,
etc. around 40 thousand years ago, even though the brain had achieved its present
"modern" size almost a million years earlier?"
MDH: Why fall into the pitfall of equating intellectual capacity, creativity and so on
with brain size? I think much of the field has gone beyond this, and certainly, Rama
should be familiar with Deacon's excellent points on the difficulty of disentangling
selection on brain size as opposed to body size. See the "Chihuahua fallacy."
Point-3: "4) Did language appear completely out of the blue as suggested by
Chomsky? Or did it evolve from a more primitive gestural language that was already
in place?"
MDH: Why is the distinction between language arising out of nothing, and evolving
from gestural systems? Why not explore the vocal communication of other animals, as many of us have done? Thus, given that no human culture has ever evolved a
non-vocal language as its primary means of communication, it seems odd to think
that our language evolved from a gestural system. Moreover, the best evidence to
date on language-like forms of communication in animals comes from their
vocalizations, not their gestural systems. See my two books The Evolution of
Communication and Wild Minds.
Point-4: "5) Humans are often called the "Machiavellian Primate" referring to our
ability to "read minds" in order to predict other peoples' behavior and outsmart
them. Why are apes and humans so good at reading other individuals' intentions?"
MDH: What? Apes reading others' intentions? Not so at all. In fact, there is almost
no evidence that apes can read the intentions of others, except for a very recent
paper by Hare, Tomasello and Call (2000, "Animal Behaviour"). All of the studies to
date suggest that apes lack a theory of mind. See Tomasello and Call's Primate
Cognition and my Wild Minds.
Point-5: "Do higher primates have a specialized brain center or module for
generating a 'theory of other minds' as proposed by Nick Humphrey and Simon
Baron-Cohen?"
MDH: Humphrey and Baron-Cohen are not responsible for the notion of theory of
mind. This goes back to David Premack and Dan Dennett.
Point-6: "The problem is that the human vocal apparatus is vastly more
sophisticated than that of any ape but without the correspondingly sophisticated
language areas in the brain the vocal equipment alone would be useless. So how did these two mechanisms with so many sophisticated interlocking parts evolve in tandem? Following Darwin's lead I suggest that our vocal equipment and our
remarkable ability to modulate voice evolved mainly for producing emotional calls
and musical sounds during courtship ("croonin a toon"). Once that evolved then the
brain - especially the left hemisphere - could evolve language."
MDH: Several problems here. First, the point about the importance of the vocal apparatus and underlying neural structure is not a new one, and is best attributed to Phil
Lieberman. Second, because language is not really about the sound structure per se
- sign language is an equally good natural language unless communication in dense
vegetation is of the essence - a focus on sound and vocal mechanisms per se is
probably misguided. Third, although the human vocal tract is different from the
vocal tract of other animals, more "sophisticated" is the wrong classificatory system.
The vocal maneuvers of a bird or a bat are extremely complicated, and we can't
come close to imitating their sounds. Moreover, many of the early claims concerning
the lack of articulatory abilities in primates are simply wrong, even though
nonhuman primates can't produce many of the sounds of human speech. Finally, the
argument that language somehow emerged from emotional calls seems really quite
impossible since the structure and function of these calls have so few of the crucial
properties of natural language: no reference, no syntax, no decomposable discrete
elements that can be recombined.
Point-6: "Mirror neurons can also enable you to imitate the movements of others
thereby setting the stage for the complex Lamarckian or cultural inheritance that
characterizes our species and liberates us from the constraints of a purely gene
based evolution. Moreover, as Rizzolati has noted, these neurons may also enable
you to mime - and possibly understand - the lip and tongue movements of others
which, in turn, could provide the opportunity for language to evolve. (This is why,
when you stick your tongue out at a newborn baby it will reciprocate! How ironic
and poignant that this little gesture encapsulates a half a million years of primate
brain evolution.) Once you have these two abilities in place the ability to read
someone's intentions and the ability to mime their vocalizations then you have set in
motion the evolution of language. You need no longer speak of a unique language
organ and the problem doesn't seem quite so mysterious any more."
MDH: This is all fine and good, but there is a puzzle that Rama fails to address:
Although mirror neurons were first discovered in macaques, and have been
implicated as crucial in imitation and theory of mind, there is not a shred of
evidence for imitation or theory of mind in macaques. Thus, from a functional
perspective, what is this circuitry doing for a macaque? It is certainly not what
Rama has suggested for humans.
Point-7: "These arguments do not in any way negate the idea that there are
specialized brain areas for language in humans. We are dealing, here, with the
question of how such areas may have evolved, not whether they exist or not."
MDH: Because of the comment in point 7, the evolutionary problem is even more
challenging. How do you go from a set of circuits in macaques that may guide motor
actions, and perceptions of them, to implementing such circuits in the service of
much more complicated cognitive acrobatics: imitation and mind reading? Moreover,
if you are going to make the evolutionary point, it is important to articulate the
selective forces that may have led to such cognitive changes.
Point-8: "I suggest that the so-called big bang occurred because certain critical
environmental triggers acted on a brain that had already become big for some other
reason and was therefore "pre-adapted" for those cultural innovations that make us
uniquely human. (One of the key pre adaptations being mirror neurons.) Inventions
like tool use, art, math and even aspects of language may have been invented
"accidentally" in one place and then spread very quickly given the human brain's
amazing capacity for imitation learning and mind reading using mirror neurons.
Perhaps ANY major "innovation" happens because of a fortuitous coincidence of

environmental circumstances - usually at a single place and time. But given our
species' remarkable propensity for miming, such an invention would tend to spread
very quickly through the population - once it emerged."
MDH: This idea is unfortunately not new at all. Many people have argued for the
importance of imitation in human evolution, arguing that it has had cataclysmic
effects in all sorts of domains. Both Merlin Donald and Michael Tomasello make this
point quite eloquently, although they do not make any appeals to mirror neurons.
Point-9: "Thus I regard Rizzolati's discovery - and my purely speculative conjectures
on their key role in our evolution - as the most important unreported story of the
last decade."
MDH: I have no problem with the point that mirror neurons represent a key finding.
As noted above, I do have several problems with Rama's claims, both in terms of
their factual correctness, and their originality.
MARC D. HAUSER is an evolutionary psychologist, and a professor at Harvard
University where he is a fellow of the Mind, Brain, and Behavior Program. He is a
professor in the departments of Anthropology and Psychology, as well as the
Program in Neurosciences. He is the author of The Evolution of Communication, and
Wild Minds: What Animals Think.

From: Milford H. Wolpoff


Date: June 1, 2000
MHW: I wouldn't know where to start with this, but please consider the following:
Marc Hauser's Point-1: 1) "The hominid brain reached almost its present size - and perhaps even its present intellectual capacity - about 250,000 years ago." "What is the basis for this date? What is meant by "intellectual capacity"? This sounds like
the tired old argument from anthropology and other disciplines that the emergence
of sophisticated tools, controlled fire, and so on represents the kind of fossilized
evidence of intelligence that is most meaningful. I think a more carefully reasoned
argument than this is necessary."
MHW: The evidence for this is quite good. Brain size has been within the modern range, that is, 2 sigma around the mean, for at least the last half million years, meaning that the differences are less than populational differences today, which cannot be meaningfully interpreted behaviorally. The widespread prepared-core technique suggests complex rule systems by the 250,000 date, and the broad human adaptive pattern and markedly expanded range of archaeological sites, including glaciated areas, suggest the same. Burials come soon thereafter, and it is not a "tired old argument from anthropology" that supports this, but facts. What is tired and old is dismissing the abundant evidence for human prehistory and evolution for a snazzier theory based on "a more carefully reasoned argument". Fossils and archaeological remains are the direct evidence we have, and here we are lucky because other species do not fossilize any remnants of their behavior.
Marc Hauser's Point-2: 3) "Why the sudden explosion (often called the "great leap" )
in technological sophistication, widespread cave art, clothes, stereotyped dwellings,
etc. around 40 thousand years ago, even though the brain had achieved its present
"modern" size almost a million years earlier?"
MHW: I'd suggest a title for this - the myths of human evolution - if it hadn't been used already. Parietal art in Europe is not this old; rock art is much older in Australia and Southern Africa. What about the "sudden explosion" of watercraft in SE Asia 700,000 years ago when Flores was colonized, the sculpting in the Levant at 250,000, etc.? This "explosion" is a Eurocentric interpretation of a much more complex and interesting history of human artistic and technological endeavors.

Hauser Point: "Why fall into the pitfall of equating intellectual capacity, creativity and so on with brain size?"
MHW: Because we have very large brains and other primate species have much
smaller ones? Because the brain is the seat of the intellectual capacity and
creativity? Because no other credible explanation has been advanced for over 100
years?
Hauser Point: "I think much of the field has gone beyond this, and certainly, Rama should be familiar with Deacon's excellent points on the difficulty of disentangling selection on brain size as opposed to body size. See the "Chihuahua fallacy"."
MHW: Perhaps so, but the field has evidently not gone beyond missing the forest for
the trees.
Hauser Point-3: 4) "Did language appear completely out of the blue as suggested by
Chomsky? Or did it evolve from a more primitive gestural language that was already
in place?"
MHW: Yes.
Hauser Point: "Why is the distinction between language arising out of nothing, and
evolving from gestural systems? Why not explore the vocal communication of other animals, as many of us have done?"
MHW: Is there yet a credible link between these and human language? Much evidence indicates that if human language has any links to primate communication systems, they are to gestural and not vocal communications. But this, of course, comes from comparing living species to each other and not to ancestors.
Hauser Point: "Thus, given that no human culture has ever evolved a non vocal
language as its primary means of communication, it seems odd to think that our
language evolved from a gestural system. "
MHW: This makes no sense. "Evolved," of course, means changed, so how can an evolutionary argument be held to the criterion of not changing?
Hauser Point: "Moreover, the best evidence to date on language-like forms of communication in animals comes from their vocalizations, not their gestural systems. See my two books The Evolution of Communication and Wild Minds."
MHW: Sure, but we did not evolve from "animals", but most directly from a common
ancestor with chimpanzees, which gives us a clue about where to look.
Hauser Point-4: 5) Humans are often called the "Machiavellian Primate" referring to
our ability to "read minds" in order to predict other peoples' behavior and outsmart
them. Why are apes and humans so good at reading other individuals' intentions?
Hauser: What? Apes reading others' intentions? Not so at all. In fact, there is almost
no evidence that apes can read the intentions of others, except for a very recent
paper by Hare, Tomasello and Call (2000, "Animal Behaviour"). All of the studies to
date suggest that apes lack a theory of mind. See Tomasello and Call's Primate
Cognition and my Wild Minds.
Hauser Point-5: Do higher primates have a specialized brain center or module for
generating a "theory of other minds" as proposed by Nick Humphrey and Simon
Baron-Cohen?
Hauser: Humphrey and Baron-Cohen are not responsible for the notion of theory of
mind. This goes back to David Premack and Dan Dennett.

Hauser Point-6: "The problem is that the human vocal apparatus is vastly more
sophisticated than that of any ape but without the correspondingly sophisticated
language areas in the brain the vocal equipment alone would be useless. So how did
these two mechanisms with so many sophisticated interlocking parts evolve in
tandem? Following Darwin's lead I suggest that our vocal equipment and our
remarkable ability to modulate voice evolved mainly for producing emotional calls
and musical sounds during courtship ("croonin a toon."). Once that evolved then the
brain - especially the left hemisphere - could evolve language."
MHW: And to think that when Frank Livingstone published a paper in 1962 entitled "Could Australopithecines Sing?", it was met with peals of laughter.
MILFORD H. WOLPOFF is Professor of Anthropology and Adjunct Associate Research
Scientist, Museum of Anthropology at the University of Michigan. His work and
theories on a "multiregional" model of human development challenge the popular "Eve" theory. He is the author (with Rachel Caspari) of Race and Human Evolution: A Fatal Attraction.

From: V.S. Ramachandran


Date: June 1, 2000
RESPONSE TO MARC HAUSER'S COMMENTS
Milford Wolpoff has done an adequate job of refuting the various purported "criticisms" of my essay raised by Marc Hauser. But here are my own reactions to Hauser, for what they are worth.
First, Hauser seems not to understand the purpose of the Edge website. He says the ideas in the essay (or at least some of them) are not "original", but I wasn't even trying to be original. The purpose of this website is to provide a platform for the exchange of ideas, and my goal was to be provocative - not original. Judging from the arguments I have already generated between Wolpoff and Hauser, I appear to have succeeded in doing this. (Needless to say, I agree with Wolpoff!) Secondly, John Brockman's invitation to me was to report on "the most important unreported story" - not on my story, but on any story. The choice of someone else's work - Rizzolati's - was quite deliberate, because its significance is not widely appreciated, except by experts in the field. (And not even by all "experts.")
But having said that let me add that, despite Hauser's comment, there are many
points in my essay that are original, e.g. our work on anosognosia patients denying
the paralysis of other patients or on MU wave suppression that occurs while you
watch another person's movements. Also the point I make about the analogy
between the "second big bang" in human culture (following the industrial/scientific
revolution) and the so-called "big bang" of 40,000 years ago has, to my knowledge,
not been made before. The argument is: we know that there could have been no
genetic change in the brain corresponding to the second big bang, so why do so
many paleoanthropologists feel the compelling need to invoke one for the first?
I turn now to some of the other issues. Hauser's remarks suggest that he hasn't
read my article carefully. (Since he appears not to understand the ideas or, in some
cases, simply repeats what I say but pretends to disagree.)
1) Brain size. I certainly don't think there is a direct and simple correlation between
brain size and intelligence. I was setting up this argument merely as a "straw man,"
as a rhetorical device, and if Hauser had read on further he would have realized this.
Indeed my essay concludes by saying that it isn't size but circuitry that is critical,
specifically the circuitry in the ventral premotor area where the mirror neurons are.
Thus Hauser is actually agreeing with me although he pretends not to.
Secondly, Hauser says (under his point no. 1): "What is meant by intellectual capacity? This sounds like the tired old argument from anthropology and other disciplines that the emergence of sophisticated tools, controlled fire, and so on represents a kind of fossilized evidence of intelligence." If sophisticated tools, fire, shelters, woven clothing etc. are not evidence of intelligence, then what IS? Perhaps Hauser would prefer that we went back in a time machine to visit early hominids and administer "IQ tests" of the kind popularized by his former colleague - the late Dick Herrnstein? Here I am in complete agreement with Wolpoff that cognitive psychologists should start paying attention to the evidence from paleoanthropology.
2) Hauser asks: monkeys have mirror neurons, so why don't they have an elaborate culture like ours? Again, if he had bothered to read the essay he would have seen that I raise the very same question twice in my article. Hauser's confusion stems from a failure to distinguish necessary and sufficient conditions. I argue in my essay that the mirror neuron system - and its subsequent elaboration in hominids - may have been necessary but not sufficient. But it may have been a decisive step. Hauser appears not to understand this idea.
3) Theory of other minds. Hauser categorically states that apes "do not have a theory of other minds". He should read the elegant work of Povinelli. I would agree with Hauser, though, that it would be nice to see clearer proof of the kind I am accustomed to in my own field (visual psychophysics). But as I said above (2), even if apes did not have a theory of other minds, this wouldn't vitiate my main argument. Perhaps mirror neurons are necessary, but they may not be sufficient for generating a theory of other minds.
4) Priority: Hauser says that the idea of a specialized mechanism in humans (and perhaps apes) for reading other minds came from David Premack and Dan Dennett, not from Nick Humphrey or Simon Baron-Cohen. Hauser may be right about this - I am not sure. Dennett is a sophisticated and original thinker and he may very well have thought of it. The earliest Humphrey reference I can think of is 1977, at a symposium I organized in Cambridge, UK (published). Can Hauser provide an earlier Dennett reference? And I am aware of Premack's ingenious experiments, but did he explicitly state that there may be a specialized mechanism for reading other minds? In any event, my essay was an entry for a website chat room - not for a stuffy journal like Psych Review. (If it had been the latter I would have been more diligent with citations and issues of priority.) There are dozens of others whom I could have cited (including Hauser's own interesting work: perhaps he is peeved that I didn't cite him), but that would have been beyond the scope of such a short essay.
5) Hauser argues that my remarks about the important role of culture in evolution are "not new". Again, I wasn't pretending they were new; of course they aren't new - the point has been made a thousand times (most recently and eloquently by Merlin Donald). What's new is the link with a specific mechanism - mirror neurons. (Or at least, this point isn't widely appreciated, and in that sense it satisfies the requirements of John Brockman's original question, "What's the single most unreported story?")
6) Hauser says: "The evolutionary problem is even more challenging. How do you go from a set of circuits in macaques that may guide motor actions, and perceptions of them, to implementing such circuits in the service of much more complicated cognitive acrobatics: imitation and mind reading?" Here, at last, is a good point from Hauser and I would agree with him; indeed it's a point that everyone, including Rizzolati, is perfectly aware of. But I would argue that mirror neurons provide an experimental lever for addressing these issues empirically instead of just speculating about how it might have happened.
7) Hauser argues "Finally, (Ramachandran's) argument that language somehow
emerged from emotional calls seems really quite impossible since the structure and
function of these calls have so few of the crucial properties of natural language: no
reference, no syntax, no decomposable discrete elements that can be recombined."
Here again Hauser has missed my point. I argued that it was initially the need for
modulating the voice for emotional calls (and perhaps singing) that exerted the
selection pressure for the development of a sophisticated vocal apparatus (and neural networks). But once these mechanisms for subtle voice modulations were in place
they provided a preadaptation - an opportunity - for language to evolve. Contrary to
Hauser's remark I certainly wasn't saying that "language evolved from emotional
calls." That would be ludicrous.
8) Hauser says: "The vocal maneuvers of a bird or a bat are extremely complicated, and we can't come close to imitating their sounds." Again Hauser confuses necessary and sufficient conditions. The emergence of vocal sophistication may have been necessary for language evolution (as I point out) but certainly not sufficient (parrots don't have language!).
In summary, I suggest Hauser read my essay again and also read Wolpoff's
refutation of the many points he raises. But I thank him for his response, for it
raises many interesting and fascinating issues that need to be widely discussed.
Or perhaps we would all be better off following the advice given by the French
Anthropological Society in the 19th century and banning all ideas about the
evolution of language! (That's why I tried to emphasize culture in my essay rather
than language per se.)
V.S. RAMACHANDRAN, M.D., PH.D., is professor and director of the Center for Brain
and Cognition, University of California, San Diego, and is adjunct professor at the
Salk Institute for Biological Studies, La Jolla, California. He is the author (with
Sandra Blakeslee) of Phantoms in the Brain: Probing the Mysteries of the Human
Mind.

From: Nicholas Humphrey


Date: June 1, 2000
A FOOTNOTE TO THE HAUSER-RAMACHANDRAN EXCHANGE
I am not generally one to bother about reputation, but Marc Hauser's gratuitous put-down of my contribution to the theory of mind debate prompts me to sound a note on my own behalf.
partly responsible for the idea that "higher primates have a specialized brain center
or module for generating a 'theory of other minds'". Instead, he says, "this goes
back to David Premack and Dan Dennett." However Ramachandran's scholarship on
this score is actually rather better than Hauser's (as it might well be, since
Ramachandran himself was in at the beginning).
There were of course important precursors, but the notion that the capacity to
theorize about other minds is an evolved specialism, dependent on a new kind of
cognitive architecture, was in fact first proposed by me in my Lister Lecture to the
British Association for the Advancement of Science in 1977, and developed at a
conference in 1978 organized by Ramachandran himself and Brian Josephson. The
earliest published version appeared as "Nature's Psychologists," (New Scientist, 29
June 1978), and a longer version with the same title appeared in Josephson and
Ramachandran's edited book, Consciousness and the Physical World (Pergamon,
1980).
Premack's famous paper "Do chimpanzees have a theory of mind" also appeared in
1978. It's true that in my own paper I did not use the phrase "theory of mind".
Instead I wrote about how a "natural psychologist" has to develop a "conceptual
model of how the mind works", based on an intuitive grasp of the "intervening
variables and causal structure." However, the basic idea is just the same. What's
more I went on to propose that in order to develop this kind of intuitive grasp, a
newly evolved cognitive skill would be required. "The trick which nature came up
with was introspection: it proved possible for an individual to develop a model of the
behavior of others by reasoning by analogy from his own case, the facts of his own
case being revealed to him by 'examination of the contents of consciousness'."

Dennett's ideas about higher order intentional systems were being developed,
independently, around the same time.
NICHOLAS HUMPHREY is a theoretical psychologist at the Centre for Philosophy of
Natural and Social Sciences, London School of Economics, and the author of
Consciousness Regained, The Inner Eye, A History of the Mind, and Leaps of Faith:
Science, Miracles, and the Search for Supernatural Consolation.

From: Marc D. Hauser


Date: June 5, 2000
Thanks to both Rama and Nick for their replies. In order to quell any further claims
of X not understanding Y, let me simply make a few points. I was not making a
blanket claim that Edge should be a forum for only original points. Not at all. This
would completely defeat the purpose of such a digital salon. I was making specific
comments about specific points. I was also not saying that Rama hasn't made many
important original comments, and findings. His own work is some of the most
profound around and I cite it all the time (note: I don't care about the lack of
citation to my own work; that wasn't the point!).
Second, I wasn't saying that fire, tools, etc. are not important in thinking about the
evolution of human culture, nor that these are not indices of human intellectual
capacity. Rather, what I was pointing to is the fact that it is commonly assumed that, because these are such extraordinary achievements, they must be evidence
that such humans had language. But the connection between language and such
abilities is never explicitly articulated. I don't have an argument to make here, but I
am very much against claims that simply invoke language without articulating, first, what it is about language that makes such cognitive abilities possible, and second, how it happened.
Third, Rama suggests I read Povinelli. Uggh. Rama should read Povinelli and see all
the critiques that have emerged. For example, in Povinelli's original experiment
using the knower-guesser procedure, he claimed that chimps, but not rhesus, have a
theory of mind because they can recognize ignorance. However, a careful analysis of
his data (i.e., as opposed to his interpretation) revealed (see C. Heyes, 1998, BBS
for one critique; there were many others) that not only did the chimps take
hundreds of trials to discriminate between knower and guesser (i.e., no theory of
mind at all), but in the key transfer test, the chimps failed as well. So, nothing at all
in Povinelli provides evidence of theory of mind, and in fact, if Rama had read recent
Povinelli, he too would see that Povinelli himself claims that chimps lack this ability;
so does Mike Tomasello, another exceptional researcher in this field.
While on the topic, I also did not mean to slight Nick Humphrey. I have long been an admirer! Given lectures and research that are not published, I think what I was trying to point out is that David Premack's chimp experiments were conducted well before the BBS publication, and Premack made a big deal of this as a specialized
mechanism. Of course Premack argued that the chimps did have a theory of mind,
but in this particular experiment at least, the same problems arise as those in
Povinelli. The chimps don't spontaneously assign the correct mental states to
humans as actors.
Finally, I didn't miss the point at all about emotional calls and language. I got it. And
I don't agree with it. The way in which we modulate our voice for emotional calls is
not sophisticated at all. It doesn't require the rapid bit rate that is critical in speech,
a point made by Lieberman many years ago. In any case, I have very much enjoyed
this. This is, after all, what a salon is all about!

John Brockman, Editor and Publisher | Kip Parent, Founding Webmaster


Copyright 2000 by Edge Foundation, Inc.