Introduction by
John Brockman
David Gelernter .....
"...prophesied the rise of the World Wide Web. He understood the idea half a decade
before it happened." (John Markoff)
"...is a treasure in the world of computer science...the most articulate and
thoughtful of the great living practitioners" (Jaron Lanier)
"...is one of the pioneers in getting many computers to work together and cooperate
on solving a single problem, which is the future of computing." (Danny Hillis)
"...is one of the most brilliant and visionary computer scientists of our time." (Bill
Joy)
Yale computer scientist David Gelernter entered the public mind one morning in
January '92 when The New York Sunday Times ran his picture on the front page of
the business section; it filled nearly the whole page. The text of the accompanying
story occupied almost another whole page inside.
In 1991 Gelernter had published a book for technologists (an extended research
paper) called Mirror Worlds, claiming in effect that one day, there would be
something like the Web. As well as forecasting the Web, the book, according to the
people who built these systems, also helped lay the basis for the internet
programming language "Java" and Sun Microsystems' "Jini."
Gelernter's earlier work on his parallel programming language "Linda" (which allows
you to distribute a computer program across a multitude of processors and thus
break down problems into a multitude of parts in order to solve them more quickly)
and "tuple spaces" underlies such modern-day systems as Sun's JavaSpaces, IBM's
T-Spaces, a Lucent company's new "InfernoSpaces" and many other descendants
worldwide.
By mid-'92 this set of ideas had taken hold and was exerting a strong influence. By 1993 the Internet was growing fast, and the Web was about to be launched.
Gelernter's research group at Yale was an acknowledged world leader in network
software and more important, it was known for "The Vision Thing", for the big
picture.
In June '93 everything stopped for Gelernter when he was critically injured by a terrorist mailbomb. He was out of action for the rest of '93 and most of '94 as the Web took off, the Internet became an international phenomenon and his aggressive forecasts started to come true. Gelernter endured numerous surgeries through '95, and then a long recuperation period.
Now Gelernter is back. In this audacious manifesto, "The Second Coming", he writes: "Everything is up for grabs. Everything will change. There is a magnificent sweep of intellectual landscape right in front of us."
JB
He is the author of Mirror Worlds (1991), The Muse In The Machine (1994), 1939: The Lost World Of The Fair (1995), and Drawing Life: Surviving The Unabomber (1998).
Click here for David Gelernter's Edge Bio Page
falling prices, computers for everybody. Theme of the Second Age now approaching:
computing transcends computers. Information travels through a sea of anonymous,
interchangeable computers like a breeze through tall grass. A desktop computer is a scooped-out hole in the beach where information from the Cybersphere wells up like seawater.
7. "The network is the computer": yes; but we're less interested in computers all
the time. The real topic in astronomy is the cosmos, not telescopes. The real topic in
computing is the Cybersphere and the cyberstructures in it, not the computers we
use as telescopes and tuners.
8. The software systems we depend on most today are operating systems (Unix, the Macintosh OS, Windows et al.) and browsers (Internet Explorer, Netscape
Communicator...). Operating systems are connectors that fasten users to
computers; they attach to the computer at one end, the user at the other. Browsers
fasten users to remote computers, to "servers" on the internet.
Today's operating systems and browsers are obsolete because people no longer want to be connected to computers, near ones OR remote ones. (They probably never did.) They want to be connected to information. In the future, people will be connected to cyberbodies; cyberbodies drift in the computational cosmos also known as the Swarm, the Cybersphere.
From The Prim Pristine Net To The Omnipresent Swarm
9. The computing future is based on "cyberbodies": self-contained, neatly-ordered, beautifully-laid-out collections of information, like immaculate giant gardens.
10. You will walk up to any "tuner" (a computer at home, work or the supermarket, or a TV, a telephone, any kind of electronic device) and slip in a "calling card," which identifies a cyberbody. The tuner tunes it in. The cyberbody arrives and settles in like a bluebird perching on a branch.
11. Your whole electronic life will be stored in a cyberbody. You can summon it to
any tuner at any time.
12. By slipping it your calling card, you customize any electronic device you touch;
for as long as it holds your card, the machine knows your habits and preferences
better than you know them yourself.
13. Any well-designed next-generation electronic gadget will come with a "Disable Omniscience" button.
14. The important challenge in computing today is to spend computing power, not hoard it.
16. The future is dense with computers. They will hang around everywhere in lush
growths like Spanish moss. They will swarm like locusts. But a swarm is not merely
a big crowd. The individuals in the swarm lose their identities. The computers that
make up this global swarm will blend together into the seamless substance of the
Cybersphere. Within the swarm, individual computers will be as anonymous as
molecules of air.
17. A cyberbody can be replicated or distributed over many computers; can inhabit
many computers at the same time. If the Cybersphere's computers are tiles in a
paved courtyard, a cyberbody is a cloud's drifting shadow covering many tiles
simultaneously.
18. But the Net will change radically before it dies. When you deal with a remote
web site, you largely bypass the power of your desktop in favor of the far-off power
of a web server. Using your powerful desktop computer as a mere channel to reach
web sites, reaching through and beyond it instead of using it, is like renting a
Hyundai and keeping your Porsche in the garage. Like executing programs out of disk
storage instead of main memory and cache. The Web makes the desktop impotent.
19. The power of desktop machines is a magnet that will reverse today's "everything
onto the Web!" trend. Desktop power will inevitably drag information out of remote
servers onto desktops.
20. If a million people use a Web site simultaneously, doesn't that mean that we
must have a heavy-duty remote server to keep them all happy? No; we could move
the site onto a million desktops and use the internet for coordination. The "site" is
like a military unit in the field, the general moving with his troops (or like a hockey
team in constant swarming motion). (We used essentially this technique to build the
first tuple space implementations. They seemed to depend on a shared server, but
the server was an illusion; there was no server, just a swarm of clients.) Could
Amazon.com be an itinerant horde instead of a fixed Central Command Post? Yes.
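The coordination trick behind this "no server, just a swarm of clients" claim is the tuple space from Gelernter's Linda. As a rough sketch only, here is a minimal, single-process imitation of the idea; the names (`out`, `rd`, `in_`) follow Linda's traditional operations, but a real system would be distributed across machines:

```python
# A toy Linda-style tuple space: clients coordinate purely by depositing
# and withdrawing tuples; no participant acts as a central server.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, *tup):
        """Deposit a tuple into the space."""
        self._tuples.append(tup)

    def rd(self, *pattern):
        """Read (without removing) the first tuple matching the pattern.
        None in the pattern acts as a wildcard."""
        for tup in self._tuples:
            if self._matches(pattern, tup):
                return tup
        return None

    def in_(self, *pattern):
        """Withdraw (remove and return) the first matching tuple."""
        for i, tup in enumerate(self._tuples):
            if self._matches(pattern, tup):
                return self._tuples.pop(i)
        return None

    @staticmethod
    def _matches(pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup)
        )


# Two "clients" share work through the space alone; the space is the
# only shared thing, and it could itself live on the swarm of clients.
space = TupleSpace()
space.out("task", 1, "index page A")
space.out("task", 2, "index page B")

claimed = space.in_("task", None, None)   # one client withdraws a work item
print(claimed)                            # ('task', 1, 'index page A')
```

The illusory "server" in the text is exactly this: the space appears centralized to its users, while its tuples may in fact be scattered across every participating desktop.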
Stranger Than Fiction: Computers Today
21. The windows-menus-mouse "desktop" interface, invented by Xerox and Apple and now universal, was a brilliant invention and is now obsolete. It wastes screen space on meaningless images, fails to provide adequate clues to what is inside the files represented by those blurry little images, forces users to choose icons for the desktop when the system could choose them better itself, and keeps users jockeying windows (like parking attendants rearranging cars in a pint-sized Manhattan lot) in a losing battle for an unimpeded view of the workspace, which is, ultimately, unattainable. No such unimpeded view exists.
22. Icons and "collapsed views" seem new, but we have met them before. Any book has a "collapsed" or "iconified" view, namely its spine. An icon conveys far less information than the average book spine and is much smaller. Should it be much smaller? Might a horizontal stack of "book spines" onscreen be more useful than a clutter of icons?
23. The computer mouse was a brilliant invention, but we can see today that it is a
bad design. Like any device that must be moved and placed precisely, it ought to
provide tactile feedback; it doesn't.
24. Metaphors have a profound effect on computing. The desktop metaphor traps us
in a "broad" instead of "deep" arrangement of information that is fundamentally
wrong for computer screens. Compared to a standard page of words, an actual
desktop is big and a computer screen is small. A desktop is easily extended (use
drawers, other desks, tables, the floor); a computer screen is not.
25. Apple could have described its interface as a pure "information landscape," with
no connection to a desktop; we invented this landscape (they might have explained)
the way a landscape architect or amusement park designer invents a landscape. We
invented an ideal space for seeing and managing computerized information. Our
landscape is imaginary, but you can still enter and move around it. The computer
screen is the window of your vehicle, the face-shield of your diving-helmet.
26. Under the desktop metaphor, the screen IS the interface: the interface is a square foot or two of glowing colors on a glass panel. In the landscape metaphor, the screen is just a viewing pane. When you look through it, you see the actual interface lying beyond.
Problems On The Surface And Under The Surface
27. Modern computing is based on an analogy between computers and file cabinets that is fundamentally wrong and affects nearly every move we make. (We store "files" on disks, write "records," organize files into "folders": file-cabinet language.) Computers are fundamentally unlike file cabinets because they can take action.
28. Metaphors have a profound effect on computing: the file-cabinet metaphor traps
us in a "passive" instead of "active" view of information management that is
fundamentally wrong for computers.
29. The rigid file and directory system you are stuck with on your Mac or PC was
designed by programmers for programmers and is still a good system for
programmers. It is no good for non-programmers. It never was, and was never
intended to be.
30. If you have three pet dogs, give them names. If you have 10,000 head of cattle,
don't bother. Nowadays the idea of giving a name to every file on your computer is
ridiculous.
31. Our standard policy on file names has far-reaching consequences: it doesn't merely force us to make up names where no name is called for; it also imposes strong limits on our handling of an important class of documents: ones that arrive from the outside world. A newly-arrived email message (for example) can't stand on its own as a separate document: it can't show up alongside other files in searches, sit by itself on the desktop, be opened or printed independently; it has no name, so it must be buried on arrival inside some existing file (the mail file) that does have a name. The same holds for incoming photos and faxes, Web bookmarks, scanned images...
32. You shouldn't have to put files in directories. The directories should reach out
and take them. If a file belongs in six directories, all six should reach out and grab it
automatically, simultaneously.
33. A file should be allowed to have no name, one name or many names. Many files
should be allowed to share one name. A file should be allowed to be in no directory,
one directory, or many directories. Many files should be allowed to share one
directory. Of these eight possibilities, only three are legal and the other five are
banned for no good reason.
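Theses 32 and 33 amount to a concrete storage model: a directory is an active rule, not a passive box, and a document may live in zero, one, or many directories without needing a name. As a hedged sketch (the class and rule names here are purely illustrative, not any real file system's API):

```python
# Sketch of "directories that reach out and take files" (theses 32-33).
# A directory is a predicate over documents; a nameless document is filed
# automatically, simultaneously, into every directory whose rule it matches.
class AutoDirectory:
    def __init__(self, label, predicate):
        self.label = label
        self.predicate = predicate   # the rule that "reaches out" for documents
        self.members = []

    def maybe_grab(self, doc):
        if self.predicate(doc):
            self.members.append(doc)


class Store:
    def __init__(self, directories):
        self.directories = directories

    def add(self, doc):
        # No name required, no single location chosen by the user:
        # every matching directory grabs the document at once.
        for d in self.directories:
            d.maybe_grab(doc)


mail = AutoDirectory("mail", lambda doc: doc.get("kind") == "email")
fifth_ave = AutoDirectory("fifth-avenue",
                          lambda doc: "Fifth Avenue" in doc.get("text", ""))

store = Store([mail, fifth_ave])
store.add({"kind": "email", "text": "Meet me on Fifth Avenue at four."})

print([d.label for d in store.directories if d.members])
# ['mail', 'fifth-avenue'] : one nameless document, in two directories at once
```

Under this model the five "banned" possibilities above cost nothing extra: membership is computed, not declared.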
Streams Of Time
34. In the beginning, computers dealt mainly in numbers and words. Today they deal mainly with pictures. In a new period now emerging, they will deal mainly with tangible time: time made visible and concrete. Chronologies and timelines tend to be awkward in the off-computer world of paper, but they are natural online.
35. Computers make alphabetical order obsolete.
36. File cabinets and human minds are information-storage systems. We could
model computerized information-storage on the mind instead of the file cabinet if we
wanted to.
37. Elements stored in a mind do not have names and are not organized into folders; they are retrieved not by name or folder but by contents. (Hear a voice, think of a face: you've retrieved a memory that contains the voice as one component.) You can see everything in your memory from the standpoint of past, present and future. Using a file cabinet, you classify information when you put it in; minds classify information when it is taken out. (Yesterday afternoon at four you stood with Natasha on Fifth Avenue in the rain: as you might recall when you are thinking about "Fifth Avenue," "rain," "Natasha" or many other things. But you attached no such labels to the memory when you acquired it. The classification happened retrospectively.)
38. A "lifestream" organizes information not as a file cabinet does but roughly as a
mind does.
39. A lifestream is a sequence of all kinds of documents: all the electronic documents, digital photos, applications, Web bookmarks, rolodex cards, email messages and every other digital information chunk in your life, arranged from oldest to youngest, constantly growing as new documents arrive, easy to browse and search, with a past, present and future, appearing on your screen as a receding parade of index cards. Documents have no names and there are no directories; you retrieve elements by content: "Fifth Avenue" yields a sub-stream of every document that mentions Fifth Avenue.
40. A stream flows because time flows, and the stream is a concrete representation of time. The "now" line divides past from future. If you have a meeting at 10AM tomorrow, you put a reminder document in the future of your stream, at 10AM tomorrow. It flows steadily towards now. When now equals 10AM tomorrow, the reminder leaps over the now line and flows into the past. When you look at the future of your stream you see your plans and appointments, flowing steadily out of the future into the present, then the past.
41. You manage a lifestream using two basic controls, put and focus, which
correspond roughly to acquiring a new memory and remembering an old one.
42. To send email, you put a document on someone else's stream. To add a note to
your calendar, you put a document in the future of your own stream. To continue
work on an old document, put a copy at the head of your stream. Sending email,
updating the calendar, opening a document are three instances of the same
operation (put a document on a stream).
43. A substream (for example the "Fifth Avenue" substream) is like a conventional directory, except that it builds itself, automatically; it traps new documents as they arrive; one document can be in many substreams; and a substream has the same structure as the main stream: a past, present and future; steady flow.
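Theses 38 through 43 together specify a small, concrete structure: one time-ordered sequence, a "now" line, a single put operation, and content-based substreams. A minimal sketch, with illustrative names only (this is not the commercial product's interface):

```python
# A toy lifestream (theses 38-43): one time-ordered sequence of documents,
# a "now" line, a put operation, and content-based substreams via focus.
import bisect

class Lifestream:
    def __init__(self):
        self._docs = []   # (timestamp, text) pairs, kept sorted by time

    def put(self, timestamp, text):
        """The one way anything enters the stream: email, calendar notes,
        drafts. A future timestamp makes the document a reminder."""
        bisect.insort(self._docs, (timestamp, text))

    def focus(self, phrase):
        """A substream: every document mentioning the phrase, still
        time-ordered, built automatically rather than filed by hand."""
        return [d for d in self._docs if phrase in d[1]]

    def future(self, now):
        """Everything beyond the 'now' line: plans and appointments,
        flowing steadily toward the present."""
        return [d for d in self._docs if d[0] > now]


stream = Lifestream()
stream.put(100, "Walked down Fifth Avenue in the rain with Natasha")
stream.put(200, "Draft of quarterly report")
stream.put(300, "Reminder: meeting at 10AM")

print(stream.focus("Fifth Avenue"))  # the "Fifth Avenue" substream
print(stream.future(now=250))        # the reminder, not yet past the now line
```

Note how sending email, adding a calendar entry, and resuming a document really are the same call here, differing only in whose stream and what timestamp `put` receives.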
In The Age Of Tangible Time
44. The point of lifestreams isn't to shift from one software structure to another but
to shift the whole premise of computerized information: to stop building glorified file
cabinets and start building (simplified, abstract) artificial minds; and to store our
electronic lives inside.
45. A lifestream can replace the desktop and subsume the functions of the file
system, email system and calendar system. You can store a movie, TV station,
virtual museum, electronic store, course of instruction at any level, electronic
auction or an institution's past, present and future (its archives, its current news
and its future plans) in a lifestream. Many websites will be organized as lifestreams.
46. The lifestream (or some other system with the same properties) will become the
most important information-organizing structure in computing because even a
rough imitation of the human mind is vastly more powerful than the most
sophisticated file cabinet ever conceived.
47. Lifestreams (in preliminary form) are a successful commercial product today, but
my predictions have nothing to do with this product. Ultimately the product may
succeed or fail. The idea will succeed.
Living Timestreams
48. Lifestreams today are conventional information structures, stored at web sites
and tuned-in using browsers. In the future they will be cyberbodies.
49. Today's operating systems connect users to computers. In the future we will
deal directly with information, in the form of cyberbodies. Operating systems will
connect cyberbodies to computers; will allow cyberbodies to dock on computers.
Users won't deal with operating systems any more, and won't care about them. Your
computer's operating system will make as much difference to you as the voltage
level of a bit in memory.
50. A lifestream is a landscape you can navigate or fly over at any level. Flying
towards the start of the stream is "time travel" into the past.
45. You can walk alongside a lifestream (browsing or searching) or you can jump in
and be immersed in information.
51. A well-designed store or public building allows you to size up the whole space from outside, or as soon as you walk in: you see immediately how things are laid out and roughly how large and deep the space is. Today's typical web site is a failure because it is opaque. You ought to be able to see immediately (not deduce or calculate) how the site is arranged, how big it is, how deep and how broad. It ought to be transparent. (For an example of a "transparent" web site, see Mirror Worlds, figure 7.6.)
52. Movies, TV shows, virtual museums and all sorts of other cultural products, from symphonies to baseball games, will be stored in lifestreams. In other words: each cultural product will be delivered to you in the form of an artificial mind. You will deal with it not as you deal with an object but roughly as you do with a person.
Institutions Afloat In The Cybersphere
53. Your car, your school, your company and yourself are all one-track vehicles moving forward through time, and they will each leave a stream-shaped cyberbody (like an aircraft's contrail) behind them as they go. These vapor-trails of crystallized experience will represent our first concrete answer to a hard question: what is a company, a university, any sort of ongoing organization or institution, if its staff and customers and owners can all change, its buildings be bulldozed, its site relocated? What's left? What is it? The answer: a lifestream in cyberspace.
54. A software or service company equals the employees plus the company lifestream. Every employee has his own view of the communal stream. The company's web site is the publicly-accessible substream of the main company stream. The company's lifestream is an electronic approximation of the company's memories, its communal mind.
50. Lifestreams don't yield the "paperless office." (The "paperless office" is a bad idea because paper is one of the most useful and valuable media ever invented.) But lifestreams can turn office paper into a temporary medium, for use, not storage. "On paper" is a good place for information you want to use; a bad place for information you want to store. In the stream-based office, for each newly-created or newly-received paper document: scan it into the stream and throw it away. When you need a paper document: find it in the stream; print it out; use it; if you wrote on the paper while using it, scan it back in; throw it out.
55. Software can solve hard problems in two ways: by algorithm, or by making connections, delivering the problem to exactly the right human problem-solver. The second technique is just as powerful as the first, but so far we have ignored it.
The Second Coming Of The Computer
56. Lifestreams and microcosms are the two most important cyberbody types; they
relate to each other as a single musical line relates to a single chord. The stream is
a "moment in space," the microcosm a moment in time.
57. Nowadays we use a scanner to transfer a document's electronic image into a computer. Soon, the scanner will become a Cybersphere port of entry, an all-purpose in-box. Put any object in the in-box and the system develops an accurate 3D physical transcription, and drops the transcription into the cool dark well of cyberspace. So the Cybersphere starts to take on just a hint of the textural richness of real life.
We'll know the system is working when a butterfly wanders into the in-box and (a few wingbeats later) flutters out, and in that brief interval the system has transcribed the creature's appearance and analyzed its way of moving, and the real butterfly leaves a shadow-butterfly behind. Some time soon afterward you'll be examining some tedious electronic document and a cyber-butterfly will appear at the bottom left corner of your screen (maybe a Hamearis lucina) and pause there, briefly hiding the text (and showing its neatly-folded rusty-chocolate wings like Victorian paisley, with orange eyespots) and moments later will have crossed the screen and be gone.
But What Does It All Matter?
58. If you have plenty of money, the best consequence (so they say) is that you no
longer need to think about money. In the future we will have plenty of technology
and the best consequence will be that we will no longer have to think about
technology.
We will return with gratitude and relief to the topics that actually count.
Jaron Lanier: This reminds me of Marx's vision of what should happen after the revolution. He imagined we'd be reading the classics and practicing archery! Idealists always believe there's some more meaningful, less dreary plane of existence that can be found in this life.
David Farber: We are at the edge of a real dramatic change in technology. For the past decade we have evolved from the view that the network is just a way of connecting computers together, to the current view that the network is the action, to the view often stated (by me and others) that no one cares about the network, only about the information and people they can access and interact with.
Danny Hillis: David Gelernter is basically right: current generation computer interfaces are not very good. (Since we are all among friends here, we can say it: they suck.)
Vinod Khosla: Transition strategies here will significantly impact the end state.
John McCarthy: Unfortunately, the making of computer systems and software is
dominated by the ideology of the omnipotent programmer (or web site designer)
who knows how the user (regarded as a child) should think and reduces the user's
control to pointing and clicking. This ideology has left even the most sophisticated
users in a helpless position compared to where they were 40 years ago in the late
1950s.
of understanding how the evolution of the internet is going to change our lives.
Gelernter is ahead of us all in peering through the fog that we call the future of
technology.
DAVID DITZEL is CEO, Transmeta Corporation
that his vision raises is whether we shall have the tools to make it real. Gelernter disparages tools. He says, "The real topic in astronomy is the cosmos, not telescopes. The real topic in computing is the cybersphere and the cyberstructures in it, not the computers..." I know more about astronomy than about computing. I
can certify that he has a one-sided view of astronomy. Modern astronomy is
dominated by tools. It is about telescopes and spacecraft as much as it is about the
cosmos that these tools explore. Every time we introduce a new tool, we see a new
cosmos. And I suspect that he has a one-sided view of computing. I suspect that
cyberspace will also be dominated by tools, as far into the future as we can imagine.
The topography of our future cyberspace will be determined more by new tools than
by Gelernter's vision. Still, he has pointed the way for the next generation of tool
builders to follow. We must hope that they will be more successful than the builders
of helicopters fifty years ago. If the tool-builders can build tools to match his vision,
then our children and grandchildren might see the Second Coming and live in the
world of Gelernter's dreams.
of philosophers, in the day-to-day world such problems remain scarce. There is,
however, a third sector to the computational universe: the realm of questions whose
answers are, in principle, computable, but that, in practice, we are unable to ask in
unambiguous language that computers can understand. This is where brains beat
computers. In the real world, most of the time, finding an answer is easier than
defining the question. It's easier to draw something that looks like a cat than to
describe what, exactly, makes something look like a cat. A child scribbles
indiscriminately, and eventually something appears that happens to resemble a cat.
A solution finds the problem, not the other way around. The world starts making
sense, and the meaningless scribbles are left behind. This is the power of that Mirror
World we now perceive as the Internet and the World Wide Web.
"An argument in favor of building a machine with initial randomness is that, if it is
large enough, it will contain every network that will ever be required," advised
cryptanalyst Irving J. Good, speaking at IBM in 1958. Even a relatively simple
network contains solutions, waiting to be discovered, to problems that need not be
explicitly defined. The network can and will answer questions that all the
programmers in the world would never have time to ask.
GEORGE DYSON is a leading authority in the field of Russian Aleut kayaks, the subject of his book Baidarka, numerous articles, and a segment of the PBS television show Scientific American Frontiers. His early life and work was portrayed in 1978 by Kenneth Brower in his classic dual biography, The Starship And The Canoe. Now ranging more widely as a historian of technology, Dyson's most recent book is Darwin Among The Machines.
wrong on the details of cyberbodies and his lifestreams. The first because, as framed, it relies still on a physical icon to identify the body, and the second because it is just one metaphor, which many will find inconvenient. In the following paragraphs I'll outline my own versions of what the revolution will bring in these two departments, and no doubt my visions will be as wrong as David's, or more so.
But first, the actuality of the revolution. David's criticisms of our current computing environments are eloquently stated, and I think widely shared. A number of projects were started about a year ago, originally through a DARPA-sponsored "Computing Expeditions" program. At CMU the expedition is called "Aura", at Berkeley it is "Endeavour" (named for Cook's ship, and hence the spelling), at the University of Washington/Xerox PARC it is called "Portolano/Workscapes". At MIT, Michael Dertouzos, Anant Agarwal and I are leading "Project Oxygen", dedicated to pervasive human-centered computing. The common theme across all these projects is that human time and attention are the limiting factors in the future, not computation speed, bandwidth, or storage.
In the past the human has been forced to climb into the computer's world: first with binary, and holes punched in cards, and then later by physically approaching that "square foot or two of glowing colors on a glass panel", and being drawn into its virtual desktop with metaphors bogged down by copies of physical constraints in real offices. In MIT's Project Oxygen, a joint project of the Laboratory for Computer Science and the Artificial Intelligence Lab, we are trying to drag the computer out into the world of people. Computers are fast enough now to see and hear, and these are the principal modalities we use to interact with other people. We are making our machines interact with people through these same modalities, using the perceptual capabilities of people rather than forcing them to rely on their cognitive abilities just to handle the interface. Cognitive capabilities should be reserved for the real things that people want to do.
Now for cyberbodies and lifestreams. By making computation people-centric, it should not matter whether I am in your office or mine, whether I pick up your PDA or mine, whether I pick up your cell phone or mine. Wherever I am, the system should adapt to my identity, whether I am carrying a "calling card" or not. It should adapt to me, not to yet another technological decoration that I need to carry around. And it should be automatic and secure as it does this. Just as people can tell my identity through vision and sound, so too can our machines. Furthermore, as computation is cheap, much cheaper these days than special-purpose circuitry (and wherever that is not true yet, it soon will be), there is no need for artifacts to have any particular identity. According to my needs at that instant, the machine in my hand should be able to morph from being a PDA to a cell phone, to an MP3/Napster player, just by changing the digital signal processing it is doing. Physics requires a little bit in the way of an aerial, but beyond that, demodulation, etc., can be in software. And then the systems should handle bandwidth restrictions behind my back, performing vertical hand-off between protocols as invisibly as today's cell phones perform horizontal hand-off between cells.
Lifestreams are one sort of metaphor. We will not be subject to the tyranny of a
single metaphor as we are subject today to the desktop metaphor which Gelernter
so masterfully scorns. For a lot of my everyday work I will prefer a metaphor of a
personal assistant. I tell it something, and it takes care of the details, watching over
me and only interceding when it sees that I need help, pulling in all the necessary
information from wherever it is located, perhaps cached ahead of time in
anticipation of my needs. After working with me for many years, my human personal assistant knows so many details of my life and interactions that I can entrust her to handle many of my interactions with the world, without me ever providing any supervision. I will want a similar relationship with my computation. Others might
prefer a geographical metaphor, zooming around through a virtual world, while a
few might like the lifestreams metaphor. Once a few of these metaphors get
invented and tried out, there will be a deluge of new metaphors as the young
hackers attack the interface problem with a vengeance.
universal tool for publication in physics and math. It is tightly and rigidly structured,
and that is what makes it so useful. It is an extremely good filing cabinet, so good
that it replaces many filing cabinets in thousands of offices all over the world.
I also don't like the metaphor of organizing my interface with the computer in terms
of the flow of real time. Another very good aspect of my computer is that it provides
the illusion that time can be frozen. I can work on several projects at once, and
each one is exactly where I left it when I go back to it. In the context of a very busy
life, full of travel and unexpected demands and developments, my computer
provides an oasis in which time advances in each window only when I pay attention
to it.
So I don't need a computer to enhance my imagination or associative memory. I
need a computer that counteracts the effects of my own too active imagination and
too busy schedule. Because of this I know that a computer that works the way my
powerbook does is something I will always need. And what makes my powerbook so
useful is the fact that it works so differently than I do. The fact that all the files have
names and locations in a hierarchical system is part of what makes it so useful.
When I want to find a paper I wrote three years ago on quantum geometry I want
to be able to pull up that file right away, not every file I wrote in the last five years
about some aspect of quantum geometry. Every once in a while I lose something
and it might be good to have a search machine that worked associatively. But not
very often.
I do agree with a lot of what David says. I can imagine lots of improvements on the
present Mac operating system. Some of the things he suggests would be very
useful. And of course the idea of a kind of cyber-agent who represents me in
cyberspace is intriguing and perhaps useful. But I have the sense that David's
manifesto is a bit like the predictions I read as a child that by the 21st century cars
would have evolved wings and we would all be flying to work. The technology of
cars has improved a bit since then, but the basic experience of driving is almost
exactly the same. Personally I don't cherish that experience so I prefer living in
places where one can get almost everywhere by public transportation. Here in
London at the beginning of the 21st century the only people who helicopter to work
regularly are a few wealthy businessmen and a few members of the royal family.
LEE SMOLIN is a theoretical physicist; professor of physics and member of the
Center for Gravitational Physics and Geometry at Pennsylvania State University;
author of The Life of the Cosmos.
Lifestreams vision, on David's whole package, but I think the experience of using it
will be extremely labor intensive, for me and for everybody.
And utterly worth all the trouble.
I must reject the final paragraph of the manifesto, which imagines an aspect of life
more meaningful than technology, which we will be free to pursue when we can
forget about technology. This reminds me of Marx's vision of what should happen after
the revolution. He imagined we'd be reading the classics and practicing archery!
Idealists always believe there's some more meaningful, less dreary plane of
existence that can be found in this life. All we have to do is fix this hulking mess in
front of us and we'll get there.
A lovely belief to hold!
JARON LANIER , a computer scientist and musician, is a pioneer of virtual reality,
and founder and former CEO of VPL. He is currently the lead scientist for the
National Tele-Immersion Initiative.
ubiquitous windows desktop is a classic example of "early lock-in", like the QWERTY
keyboard and strange conventions for English spelling. These are both generally
acknowledged as unfortunate accidents of history. They are non-optimal, but not
quite bad enough to be worth changing. In fact, the standard computer interface
incorporates both of these awful interfaces, yet interestingly, Gelernter does not
suggest changing them.
Are we at the point where the desktop computer interface will be thrown out and
replaced with something better? Is the computer desktop like the Roman alphabet,
which we have learned to live with in spite of its quirks, or is it like the Roman system
of numerals, which we have pretty much abandoned? As much as I like the idea
of starting with a clean slate, I think it is more like the alphabet than the
numerals, and it is more likely that the desktop interface will be improved than
abandoned. Most of the specific improvements that Gelernter suggests, like content
addressing, time-linking and multiple names, can be and are being incorporated into
standard interfaces. It won't be elegant, but it will work.
So does this mean that we are doomed to a millennium of Windows 2xxx? I doubt it.
As Scott McNealy is fond of pointing out, current PC operating systems are unwieldy
"hair balls" of accumulated history. Eventually, someone will start from scratch and
build something better. But I would be surprised if they start by throwing out the
part that most users are the most comfortable with, which is the metaphor of physical
document handling. The replacement, when it emerges, will win by doing a better
job of the same thing.
Yet, there is also a second type of competition, which is not so much a replacement
as an addition. Computers are useful for more than handling documents, and other
interfaces will be developed for these other functions. These are interfaces more
likely to nurture the emergence of radical new ideas. If David Gelernter really wants
to invent a new interface (and he would probably be good at it) he should forget
about looking for a better way to handle documents, and start thinking about a
computer that handles ideas.
W. DANIEL HILLIS, former vice president of research and development at The Walt
Disney Company, is the co-founder of a startup, Applied Minds. He is the author of
The Pattern On The Stone: The Simple Ideas That Make Computers Work.
Suppose I'm reading a message that I consider significant. Typing a single command
inserts a reference to the appropriate page in the message file at the end of a
special file of messages, puts in the time, and puts me where I can add an
identifying comment. The entry for the email with the manifesto is "Sat Jun 17
12:48:28 2000 /u/jmc/RMAIL.S00==1906 Gelernter Manifesto", giving the time, the
location of the message in the mail file and the name I gave the message.
If I later click on that line, I'll be reading the message again.
The purpose of messages having names of some sort is so that the receiver can
retrieve a message later. I doubt that such a name can be automatically generated
from the message itself, because the subject line, etc. are in the mental space of the
sender, not the receiver. The receiver has to somehow give the message a name if
he wants to be able to subsequently retrieve it in one step. In this case, I chose
"Gelernter Manifesto".
It took 12 minutes to write and debug the message naming facility in the Xemacs
editor. The MS-Word users I consulted told me that it would be very difficult to script
MS-Word and Windows email systems to do it.
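McCarthy's facility was written in Xemacs Lisp; his code is not reproduced here. As a rough sketch of the same idea in Python (the function names and file handling below are invented for illustration, not McCarthy's implementation), one command appends a timestamped, user-named reference to a message location, and a second retrieves it:

```python
import time

def name_message(index_path, mail_file, offset, name):
    """Append an entry of the form
    'Sat Jun 17 12:48:28 2000 /u/jmc/RMAIL.S00==1906 Gelernter Manifesto':
    timestamp, message location in the mail file, then the user-chosen name."""
    stamp = time.strftime("%a %b %d %H:%M:%S %Y")
    entry = f"{stamp} {mail_file}=={offset} {name}"
    with open(index_path, "a") as f:
        f.write(entry + "\n")
    return entry

def find_message(index_path, name):
    """Look up the mail file and offset recorded under a given name."""
    with open(index_path) as f:
        for line in f:
            if line.rstrip().endswith(name):
                loc = line.split()[5]          # e.g. /u/jmc/RMAIL.S00==1906
                mail_file, offset = loc.split("==")
                return mail_file, int(offset)
    return None
```

The sketch makes McCarthy's point concrete: the timestamp and location can be generated mechanically, but the name, "Gelernter Manifesto", has to come from the receiver.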
ii. We all find ourselves repeating essentially the same tasks in using computers.
Here's a slogan.
Anything a user can do himself, he should be able to make the computer do for him.
Fully realizing this slogan would be a big step, but even a little helps. It's called
letting the user "customize" his environment. Point i above is a small example.
Unfortunately, the making of computer systems and software is dominated by the
ideology of the omnipotent programmer (or web site designer) who knows how the
user (regarded as a child) should think and reduces the user's control to pointing
and clicking. This ideology has left even the most sophisticated users in a helpless
position compared to where they were 40 years ago in the late 1950s.
Scripting languages were a start in the direction of giving the user more power, but
the present ones aren't much good, and not even programmers use them much to
make their own lives simpler. Scripting is particularly awkward for point and click
use. Xemacs customization is reasonably convenient, but it isn't contiguous with
Xemacs Lisp, a really good programming language.
Linux is a step in the right direction of giving the user control in that the source of
the operating system is available to users, but I doubt that many users change
Linux for purely personal convenience.
Back to Gelernter
iii. Most of the Manifesto's metaphors, e.g. "beer from burst barrels" and "scooped
out hole in the beach", aren't informative.
iv. In item 4, Gelernter offers
The Orwell law of the future: any new technology that CAN be tried WILL be. Like
Adam Smith's invisible hand (leading capitalist economies toward ever increasing
wealth), Orwell's Law is an empirical fact of life.
It isn't true, and I don't believe Orwell said it. In the preface to "1984", Orwell wrote
that "1984" is a cautionary tale that he didn't expect to happen. In particular,
"1984" has the tv that permitted Big Brother's minions to spy on the viewer. I don't
think Orwell expected that to be tried, and it hasn't been.
Indeed the reverse is true. Most possible new technologies are never tried.
v. Gelernter, like many other commentators, is glib about the system software and
its documentation being bad. Don Norman beat that drum, and Apple hired him to
make things better. He and they didn't have much success. A more careful analysis
of what causes difficulty and how to fix it is needed.
vi. The problem with file systems and any other tree structures is that tree
structures aren't memorable. Someone else's tree structure, e.g. a telephone
keypad tree, is often helpful the first time you use it, but it is a pain to go through
the tree again and again to reach a particular leaf.
vii. I couldn't figure out what Cybersphere was supposed to mean except that it's
grand, and I see that the other commentators didn't either. Computers haven't
changed people's lives to the extent that telephones, radio, automobiles and air
travel did early in the previous century. Paul Krugman is eloquent on this point in
the NY Times for 2000 June 18. Human level artificial intelligence would
revolutionize human life, but fewer people in AI are working in that direction than in
the 1970s. Erik Mueller documents one aspect of this neglect in his 1999 article
http://www.media.mit.edu/~mueller/papers/storyund.html.
viii. I think the idea of doing an Amazon search for a book on your own computer is
a bad one, because the computations are trivial, whereas the file accesses to the
Amazon database are substantial. To do it on your own computer would require
downloading the whole Amazon catalog before you started your search.
ix. Re items 21 thru 26, I don't think changing "desktop" to "information landscape"
would have made much difference. The problem of what you can do with a small
screen will remain as long as we have small screens. Two-foot by three-foot flat screens
with 200 bits per inch resolution will change computer use much more than another
factor of 100 in processor speed. We also need the bathtub screen, the beach screen
and the bed screen.
x. item 32. Directories reaching out for files is vague and suggests more AI than is
currently available.
xi. There's something in "streams of time", but it's vague. One thing that is feasible
is for an operating system to make a journal including all the user's keystrokes and
mouse clicks and more substantial identifiable operations. The journal should be
available for the user to inspect, replay bits of, and to offer for expert inspection
when something has gone wrong.
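The journal in point xi could be prototyped at the application level without waiting for operating-system support. The minimal sketch below assumes nothing about any real OS facility; the class and method names are invented for illustration:

```python
import time

class EventJournal:
    """Record user events (keystrokes, clicks, larger operations) with
    timestamps, so a session can be inspected, or a slice of it replayed,
    after something has gone wrong."""

    def __init__(self):
        self.events = []

    def record(self, kind, detail):
        self.events.append((time.time(), kind, detail))

    def inspect(self, kind=None):
        """Return recorded events, optionally filtered by kind."""
        return [e for e in self.events if kind is None or e[1] == kind]

    def replay(self, start, end, handler):
        """Re-run a slice of the journal through a handler function."""
        for stamp, kind, detail in self.events[start:end]:
            handler(kind, detail)
```

Inspection and replay are deliberately separate: a user reviews the journal to find where trouble began, then replays only the events up to that point.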
xii. I don't understand the objection to names; they were invented long before
computers. In item 37, Natasha and Fifth Avenue are names.
xiii. item 41. "To send email, you put a document on someone else's stream." That
suggests that the recipient would read it right away or at least at a time determined
by the sender. Present email sits till you get around to it, and that's better.
xiv. Paper will be needed until screens are better. I use paper just as Gelernter
suggests. Print the document for reading and then throw it away. I'll do that even at
the cost of losing the pretty red ink I've put on my printout of the Manifesto.
JOHN McCARTHY is Professor of Computer Science at Stanford University. A pioneer
in artificial intelligence, McCarthy invented LISP, the preeminent AI programming
language, and first proposed general-purpose time sharing of computers.
environmental circumstances - usually at a single place and time. But given our
species' remarkable propensity for miming, such an invention would tend to spread
very quickly through the population - once it emerged."
MDH: This idea is unfortunately not new at all. Many people have argued for the
importance of imitation in human evolution, arguing that it has had cataclysmic
effects in all sorts of domains. Both Merlin Donald and Michael Tomasello make this
point quite eloquently, although they do not make any appeals to mirror neurons.
Point-9: "Thus I regard Rizzolati's discovery - and my purely speculative conjectures
on their key role in our evolution - as the most important unreported story of the
last decade."
MDH: I have no problem with the point that mirror neurons represent a key finding.
As noted above, I do have several problems with Rama's claims, both in terms of
their factual correctness, and their originality.
MARC D. HAUSER is an evolutionary psychologist, and a professor at Harvard
University where he is a fellow of the Mind, Brain, and Behavior Program. He is a
professor in the departments of Anthropology and Psychology, as well as the
Program in Neurosciences. He is the author of The Evolution of Communication, and
Wild Minds: What Animals Think.
Why fall into the pitfall of equating intellectual capacity, creativity and so on with
brain size?"
MHW: Because we have very large brains and other primate species have much
smaller ones? Because the brain is the seat of the intellectual capacity and
creativity? Because no other credible explanation has been advanced for over 100
years?
I think much of the field has gone beyond this, and certainly, Rama should be
familiar with Deacon's excellent points on the difficulty of disentangling selection on
brain size as opposed to body size. See the "Chihuahua fallacy".
MHW: Perhaps so, but the field has evidently not gone beyond missing the forest for
the trees.
Hauser Point-3: 4)"Did language appear completely out of the blue as suggested by
Chomsky? Or did it evolve from a more primitive gestural language that was already
in place?"
MHW: yes
Hauser Point: "Why is the distinction between language arising out of nothing, and
evolving from gestural systems? Why not explore the vocal communication of other
animals, as many of us have done. "
MHW: is there yet a credible link between these and human language? Much
evidence indicates that if human language has any links to primate communication
systems, they are to gestural and not vocal communications. But this, of course,
comes from comparing living species to each other and not to ancestors.
Hauser Point: "Thus, given that no human culture has ever evolved a non vocal
language as its primary means of communication, it seems odd to think that our
language evolved from a gestural system. "
MHW: This makes no sense. "Evolved," of course, means changed, so how can an
evolutionary argument be held to the criterion of not changing?
Hauser Point: "Moreover, the best evidence to date on language-like forms of communication in
animals come from their vocalizations, not their gestural systems. See my two
books "The Evolution of Communication" and "Wild Minds"."
MHW: Sure, but we did not evolve from "animals", but most directly from a common
ancestor with chimpanzees, which gives us a clue about where to look.
Hauser Point-4: 5) Humans are often called the "Machiavellian Primate" referring to
our ability to "read minds" in order to predict other peoples' behavior and outsmart
them. Why are apes and humans so good at reading other individuals' intentions?
MHW: What? Apes reading others' intentions? Not so at all. In fact, there is almost
no evidence that apes can read the intentions of others, except for a very recent
paper by Hare, Tomasello and Call (2000, "Animal Behaviour"). All of the studies to
date suggest that apes lack a theory of mind. See Tomasello and Call's Primate
Cognition and my Wild Minds.
Hauser Point-5: Do higher primates have a specialized brain center or module for
generating a "theory of other minds" as proposed by Nick Humphrey and Simon
Baron-Cohen?
MHW: Humphrey and Baron-Cohen are not responsible for the notion of theory of
mind. This goes back to David Premack and Dan Dennett.
Hauser Point-6: "The problem is that the human vocal apparatus is vastly more
sophisticated than that of any ape but without the correspondingly sophisticated
language areas in the brain the vocal equipment alone would be useless. So how did
these two mechanisms with so many sophisticated interlocking parts evolve in
tandem? Following Darwin's lead I suggest that our vocal equipment and our
remarkable ability to modulate voice evolved mainly for producing emotional calls
and musical sounds during courtship ("croonin a toon."). Once that evolved then the
brain - especially the left hemisphere - could evolve language."
MHW: and to think that when Frank Livingstone published a paper in 1962 entitled
"could australopithecines sing", it was met with peals of laughter.
MILFORD H. WOLPOFF is Professor of Anthropology and Adjunct Associate Research
Scientist, Museum of Anthropology at the University of Michigan. His work and
theories on a "multiregional" model of human development challenge the popular
"Eve" theory. He is the author (with Rachel Caspari) of Race and Human Evolution: A
Fatal Attraction.
capacity? This sounds like the tired old argument from anthropology and other
disciplines that the emergence of sophisticated tools, controlled fire, and so on
represents a kind of fossilized evidence of intelligence." If sophisticated tools, fire,
shelters, woven clothing etc. are not evidence of intelligence, then what IS? Perhaps
Hauser would prefer that we went back in a time machine to visit early hominids to
administer "IQ tests" of the kind popularized by his former colleague, the late Dick
Herrnstein? Here I am in complete agreement with Wolpoff that cognitive
psychologists should start paying attention to the evidence from paleoanthropology.
2) Hauser asks: Monkeys have mirror neurons so why don't they have an elaborate
culture like us? Again, if he had bothered to read the essay he would have seen that I raise
the very same question twice in my article. Hauser's confusion stems from a failure
to distinguish necessary and sufficient conditions. I argue in my essay that the
mirror neuron system - and its subsequent elaboration in hominids- may have been
necessary but not sufficient. But it may have been a decisive step. Hauser appears
not to understand this idea.
4) Theory of other minds. Hauser categorically states that apes "do not have a
theory of other minds". He should read the elegant work of Povinelli. I would agree
with Hauser, though, that it would be nice to see clearer proof of the kind I am
accustomed to in my own field (visual psychophysics). But as I said above (2), even if
apes did not have a theory of other minds, this wouldn't vitiate my main argument.
Perhaps mirror neurons are necessary, but they may not be sufficient for generating
a theory of other minds.
4) Priority: Hauser says that the idea of a specialized mechanism in humans (and
perhaps apes) for reading other minds came from David Premack and Dan Dennett
not from Nick Humphrey or Simon Baron-Cohen. Hauser may be right about this; I
am not sure. Dennett is a sophisticated and original thinker and he may very well
have thought of it. The earliest Humphrey reference I can think of is 1977 at a
symposium I organized in Cambridge, UK (published). Can Hauser provide an earlier
Dennett reference? And I am aware of Premack's ingenious experiments, but did he
explicitly state that there may be a specialized mechanism for reading other minds?
In any event my essay was an entry for a website chat room, not for a stuffy
journal like Psych Review. (If it had been the latter I would have been more diligent
with citations and issues of priority.) There are dozens of others whom I could have
cited (including Hauser's own interesting work: perhaps he is peeved that I didn't
cite him) but that would have been beyond the scope of such a short essay.
5) Hauser argues that my remarks about the important role of culture in
evolution are "not new". Again, I wasn't pretending it was new; of course it isn't
new, it's been made a thousand times (most recently and eloquently by Merlin
Donald). What's new is the link with a specific mechanism: mirror neurons (or at
least, this point isn't widely appreciated, and in that sense it satisfies the
requirements of John Brockman's original question "what's the single most
unreported story").
6) Hauser says "The evolutionary problem is even more
challenging. How do you go from a set of circuits in macaques that may guide motor
actions, and perceptions of them, to implementing such circuits in the service of
much more complicated cognitive acrobatics: imitation and mind reading?" Here, at
last, is a good point from Hauser and I would agree with him; indeed it's a point
that everyone, including Rizzolati, is perfectly aware of. But I would argue that
mirror neurons provide an experimental lever for addressing these issues empirically
instead of just speculating about how it might have happened.
7) Hauser argues "Finally, (Ramachandran's) argument that language somehow
emerged from emotional calls seems really quite impossible since the structure and
function of these calls have so few of the crucial properties of natural language: no
reference, no syntax, no decomposable discrete elements that can be recombined."
Here again Hauser has missed my point. I argued that it was initially the need for
modulating the voice for emotional calls (and perhaps singing) that exerted the
selection pressure for the development of sophisticated vocal apparatus (and neural
networks). But once these mechanisms for subtle voice modulations were in place
they provided a preadaptation - an opportunity - for language to evolve. Contrary to
Hauser's remark I certainly wasn't saying that "language evolved from emotional
calls." That would be ludicrous.
8) Hauser says "The vocal maneuvers of a bird or a bat are extremely complicated,
and we can't come close to imitating their sounds". Again Hauser confuses
necessary and sufficient conditions. The emergence of vocal sophistication may have
been necessary for language evolution (as I point out) but certainly not sufficient
(parrots don't have language!).
In summary, I suggest Hauser read my essay again and also read Wolpoff's
refutation of the many points he raises. But I thank him for his response, for it
raises many interesting and fascinating issues that need to be widely discussed.
Or perhaps we would all be better off following the advice given by the French
Anthropological Society in the 19th century and banning all ideas about the
evolution of language! (That's why I tried to emphasize culture in my essay rather
than language per se.)
V.S. RAMACHANDRAN, M.D., PH.D., is professor and director of the Center for Brain
and Cognition, University of California, San Diego, and is adjunct professor at the
Salk Institute for Biological Studies, La Jolla, California. He is the author (with
Sandra Blakeslee) of Phantoms in the Brain: Probing the Mysteries of the Human
Mind.
Dennett's ideas about higher order intentional systems were being developed,
independently, around the same time.
NICHOLAS HUMPHREY is a theoretical psychologist at the Centre for Philosophy of
Natural and Social Sciences, London School of Economics, and the author of
Consciousness Regained, The Inner Eye, A History of the Mind, and Leaps of Faith:
Science, Miracles, and the Search for Supernatural Consolation.