
Nicholas Negroponte: WIRED Columns

6.12 Beyond Digital (December 1998)
6.11 Pricing the Future (November 1998)
6.10 Being Anonymous (October 1998)
6.09 One-Room Rural Schools (September 1998)
6.08 Contraintuitive (August 1998)
6.07 The Future of Retail (July 1998)
6.06 Bandwidth Revisited (June 1998)
6.05 Taxing Taxes (May 1998)
6.04 RJ-11 (April 1998)
6.03 Toys of Tomorrow (March 1998)
6.02 Powerless Computing (February 1998)
6.01 The Third Shall Be First (January 1998)
5.12 Nation.1 (December 1997)
5.11 New Standards for Standards (November 1997)
5.10 On Digital Growth and Form (October 1997)
5.09 Reintermediated (September 1997)
5.08 Wireless Revisited (August 1997)
5.07 Digital Obesity (July 1997)
5.06 2B1 (June 1997)
5.05 Tangible Bits (May 1997)
5.04 Dear PTT (April 1997)
5.03 Pay Whom Per What When, Part II (March 1997)
5.02 Pay Whom Per What When, Part I (February 1997)
5.01 Surfaces and Displays (January 1997)
4.12 Laptop Envy (December 1996)
4.11 Being Local (November 1996)

4.10 Electronic Word of Mouth (October 1996)
4.09 The Future of Phone Companies (September 1996)
4.08 Building Better Backchannels (August 1996)
4.07 Object-Oriented Television (July 1996)
4.06 Who Will the Next Billion Users Be? (June 1996)
4.05 Caught Browsing Again (May 1996)
4.04 Affective Computing (April 1996)
4.03 Pluralistic, Not Imperialistic (March 1996)
4.02 The Future of Books (February 1996)
4.01 Where Do New Ideas Come From? (January 1996)
3.12 Wearable Computing (December 1995)
3.11 Being Decimal (November 1995)
3.10 2020: The Fiber-Coax Legacy (October 1995)
3.09 Get a Life? (September 1995)
3.08 Bit by Bit, PCs Are Becoming TVs. Or Is It the Other Way Around? (August 1995)
3.07 Affordable Computing (July 1995)
3.06 Digital Videodiscs: Either Format Is Wrong (June 1995)
3.05 A Bill of Writes (May 1995)
3.04 The Balance of Trade of Ideas (April 1995)
3.03 000 000 111: Double Agents (March 1995)
3.02 Being Digital: A book (p)review (February 1995)
3.01 Bits and Atoms (January 1995)
2.12 Digital Expression (December 1994)
2.11 Digital Etiquette (November 1994)
2.10 Sensor Deprived (October 1994)
2.09 Why Europe Is So Unwired (September 1994)
2.08 Prime Time Is My Time: The Blockbuster Myth (August 1994)
2.07 Learning by Doing: Don't Dissect the Frog, Build It (July 1994)
2.06 Less Is More: Interface Agents as Digital Butlers (June 1994)
2.05 Bit by Bit on Wall Street: Lucky Strikes Again (May 1994)
2.04 The Fax of Life: Playing a Bit Part (April 1994)
2.03 Talking with Computers (March 1994)


2.02 Talking to Computers: Time for a New Perspective (February 1994)
2.01 Aliasing: The Blind Spot of the Computer Industry (January 1994)
1.06 Virtual Reality: Oxymoron or Pleonasm? (November 1993)
1.05 Repurposing the Material Girl (October 1993)
1.04 Set-Top Box As Electronic Toll Booth: Why We Need Open-Architecture TV (August 1993)
1.03 Debunking Bandwidth: From Shop Talk to Small Talk (June 1993)
1.02 The Bit Police: Will the FCC Regulate Licenses to Radiate Bits? (April 1993)
1.01 HDTV: What's Wrong With this Picture? (January 1993)



WIRED 6.12 - Beyond Digital


Beyond Digital

Sometimes defining the spirit of an age can be as simple as a single word. You may remember, for instance, the succinct (if somewhat cryptic) career advice given to young Benjamin Braddock, played by Dustin Hoffman, in the 1967 film The Graduate: "Plastics." "Exactly how do you mean?" asked Ben. "There's a great future in plastics," replied Mr. McGuire. "Think about it. Will you think about it?" Now that we're in that future, of course, plastics are no big deal. Is digital destined for the same banality? Certainly. Its literal form, the technology, is already beginning to be taken for granted, and its connotation will become tomorrow's commercial and cultural compost for new ideas. Like air and drinking water, being digital will be noticed only by its absence, not its presence.

The decades ahead will be a period of comprehending biotech, mastering nature, and realizing extraterrestrial travel, with DNA computers, microrobots, and nanotechnologies the main characters on the technological stage. Computers as we know them today will a) be boring, and b) disappear into things that are first and foremost something else: smart nails, self-cleaning shirts, driverless cars, therapeutic Barbie dolls, intelligent doorknobs that let the Federal Express man in and Fido out, but not 10 other dogs back in. Computers will be a sweeping yet invisible part of our everyday lives: We'll live in them, wear them, even eat them. A computer a day will keep the doctor away.

The foothills of the future


And so? I know: Extrapolating bandwidth, processor speed, network dimensions, or the shrinking size of electromechanical devices has become truly tiresome. Moore's Law, first expounded by Gordon Moore in 1965, is indeed a stroke of brilliance, but one more mention of it should make you puke. Terabit access, petahertz processors, planetary networks, and disk drives on the heads of pins will be ... they'll just be. Face it - the Digital Revolution is over.



Yes, we are now in a digital age, to whatever degree our culture, infrastructure, and economy (in that order) allow us. But the really surprising changes will be elsewhere, in our lifestyle and how we collectively manage ourselves on this planet.

Consider the term "horseless carriage." Blindered by what came before them, the inventors of the automobile could not see the huge impact it would have on how we work and play, how we build and use cities, or how we derive new business models and create new derivative businesses. It was hard, in other words, to imagine a concept such as no-fault insurance in the days of the horse and buggy. We have a similar blindness today, because we just cannot imagine a world in which our sense of identity and community truly cohabits the real and virtual realms. We know that the higher we climb, the thinner the air, but we haven't experienced it - we're not even at digital base camp.

Looking forward, I see five forces of change that come from the digital age and will affect the planet profoundly: 1) global imperatives, 2) size polarities, 3) redefined time, 4) egalitarian energy, and 5) meaningless territory.

Being global
As humans, we tend to be suspicious of those who do not look like us, dress like us, or act like us, because our immediate field of vision includes people more or less like us. In the future, communities formed by ideas will be as strong as those formed by the forces of physical proximity. Kids will not know the meaning of nationalism. Nations, as we know them today, will erode because they are neither big enough to be global nor small enough to be local. The evolutionary life of the nation-state will turn out to be far shorter than that of the pterodactyl. Local governance will abound. A united planet is certain, but when is not.

Being big and small


All things digital get bigger and smaller at the same time - most things in the middle fall out. We'll see a rise in huge corporations, airplanes, hotels, and newspaper chains in parallel with growth in mom-and-pop companies, private planes, homespun inns, and newsletters written about interests most of us did not even know humans have. The only value in being big in any corporate sense will be the ability to lose billions of dollars before making them.

Being prime
Prime time will be my time. We'll all live very asynchronous lives, in far less lockstep obedience to each other. Any store that is not open 24 hours will be noncompetitive. The idea that we collectively rush off to watch a television program at 9:00 p.m. will be nothing less than goofy. It will make sense only for sporting events and election results - and that is only because people are betting. The true luxury in life is to not set an alarm clock and to stay in pajamas as long as you like. From this follows a complete renaissance of rural living. In the distant future, the need for cities will disappear.

Being equal
The caste system is an artifact of the world of atoms. Even dogs seem to know that on the Net. Childhood and old age will be redefined. Children will become more active players, learning by doing and teaching, not just being seen and not heard. Retirement will disappear as a concept, and productive lives will be increased by all measures, most important those of self. Your achievements and contributions will come from their own value.

Being unterritorial
Sovereignty is about land. A lot of killing goes on for reasons that do not make sense in a world where landlords will be far less important than webmasters. We'll be drawing our lines in cyberspace, not in the sand. Already today, belonging to a digital culture binds people more strongly than the territorial adhesives of geography - if all parties are truly digital.

Ask yourself about the basics, about water, air, and fire. Remember the game 20 Questions? You begin by giving a hint as to whether you are thinking of an animal, a vegetable, or a mineral. OK. I am thinking of none of them. I am thinking of 100111100010110001.

Next: After six years of writing the back page, I have decided it is time to pass this prime real estate on to someone else, before I find myself on the wrong side of the Wired/Tired equation. I won't be gone too far and will appear at times in this and other parts of the magazine. Promise.


[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.12, December 1998.]


WIRED 6.11 - Pricing the Future


Pricing the Future

Time is money, and in more senses than one. At the current rate of inflation, a 1998 dollar will be worth only 19 cents in 2050. Yet the same dollar, invested conservatively today, will grow to more than US$20 in the same period. There's an obvious benefit to thinking long term. But too often we seem not to care. Like boards of directors who don't look further than the next quarterly report or politicians who can't see past the next election, we have a hard time ignoring today's immediate distractions. Besides, each of us is only a short blip in the long trajectory of history. It is easier to let people living in the distant future worry about it.

Let's dismiss that excuse once and for all, especially for thought leaders inside governments and NGOs, who must look beyond their individual terms in office and, in some cases, beyond their individual lifetimes. The same is true for anyone who wants to change the future economic well-being of his or her country, region, or the world at large.
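The arithmetic behind those two numbers is plain compound interest over the 52 years from 1998 to 2050. A minimal sketch in Python (the steady annual rates are my assumption; the column states only the endpoints):

    # Compound interest behind the column's two figures (rates assumed).
    years = 2050 - 1998            # 52 years
    inflation = 0.032              # ~3.2% a year shrinks $1.00 to ~$0.19
    growth = 0.059                 # ~5.9% a year grows $1.00 to ~$20
    print(1 / (1 + inflation) ** years)   # ~0.19
    print((1 + growth) ** years)          # ~19.9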

Flatland
Young people, I happen to believe, are the world's most precious natural resource. They may also be the most practical means of effecting long-term change: Making even small opportunities for children today will make the world a much better place tomorrow. Frankly, I have almost given up on adults, who seem generally to have screwed things up despite the good work being done in many parts of the globe. So I am increasingly inclined to seek out ways for the 6- to 12-year-olds of our planet to learn how to learn, globally as well as locally. Education, however, is the formal jurisdiction of national, state, and local bureaucracies. And trying to bring about change through various ministries, departments, or boards of education tends to be a highly politicized and, at best, slow-moving process. Some ex officio move is needed, something outside the official fabric of school that can do for learning what the Internet has done for communicating.

Fortunately, there is a small, very basic step we can take today that will have a huge, lasting effect on tomorrow: Price local telephone calls at a flat rate. Though most people in the United States already enjoy flat rates, the same is not true, alas, in most of Europe or the developing world. Flat rates for local calls are universally employed in only 13 percent of the countries around the globe. Sure, nothing is ever simple, but this one is real close. Unfettered access to the Net is key to the future of education. And learning, whether it's face-to-face or at great distance, takes time. Yet metered, by-the-minute pricing fosters short-term thinking in the most elementary sense. Instead of encouraging children to explore, parents nervously watch the clock as soon as their kids log on. The incentive is to have your child spend less time learning, not more - something unimaginable with a book or a library. Ironically, the high cost associated with time spent on the Net is not from Internet access itself, which is generally flat rate, but from the local telephone bill.

Metered billing has come about from, among other things, the historical limitations of circuit-switched voice networks. Telecommunications in most of the world has traditionally been a public utility, owned and operated by the government; people therefore assumed civil servants were providing the least expensive and most beneficial service. The benefits of increased telecompetition, of course, have now become clear. And as the pendulum continues to swing toward privatization around the world, national phone companies must dress up for the party. Yet in anticipation of being privatized, some telcos have raised local rates. And even in markets where new economic models have emerged with the growth of packet switching, some are arguing to price data on a per-packet basis. This is crazy - and exactly the wrong way to go for Internet users, who want and need low and fixed local rates. Mind you, I am not saying free or even unreasonably low. Fixed.

Note to telcos: Take into consideration the cost of metered billing that you will now save by offering fixed rates. And give discounts for a second line. A lot of children will be better off for it.

TV may have it right


Though there is not much good that can be said for the vast majority of television programming, the pricing model may be right. A large chunk of worldwide broadcasting is advertiser supported, which makes it free to the user. Another piece is provided via monthly fee or yearly tax. And a third piece is paid per view. As telcos see themselves getting more and more into the content business, this makes far more sense for their future, too. Some 15 years ago, I jokingly suggested that the cost of cellular telephone calls should be supported by advertising. To my great surprise, this jest seems to be fast approaching reality.


For the price of a few TV-like commercial breaks, people can now make free calls - local or long distance, wireline or wireless - thanks to the Swedish company GratisTel; Seattle's Network 3H3 offers similar ad-supported service.

Should digital access be a fundamental human right?

"Extreme" though it may be, the idea of digital access as a human right has been bandied about by some very staid organizations. Of course, even the most widely accepted human rights are the subject of enormous, often inconsistent cultural debate. Many people in the United States, for example, both support the death penalty and oppose the right to have an abortion. Now consider freedom of information. The Muslim world, for one, is not so anxious to see universal access to the World Wide Web. Given the amount of trash collecting out there, this is not so surprising. Since we cannot even agree on the right to life, should we presume to say access to the Net is a human right? You bet.

The only thing we know about the future is that it will be inhabited by our children. Its quality, in other words, is directly proportional to world education. While this can be improved by institutions and governments (see September's column, "One-Room Rural Schools"), the most rapid change will come from the personal resolve of millions of individual children. It will come from being passionate about the world, its people, and their knowledge. Unless we price access to the Net far more fairly than we do today, the dream will not become reality. Get with it, telcos: Get rid of those meters. Time is running out.

Next: Beyond Digital
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.11, November 1998.]


WIRED 6.09 - One-Room Rural Schools


One-Room Rural Schools


Will the information-rich get richer and the information-poor get poorer? Will the divide shrink, or expand? The question might also be phrased in terms of the education-rich and the education-poor. The latter category includes some 200 million children who do not complete their primary education. Still, the state of the world in terms of access to digital technologies can be viewed as half-full or half-empty. Optimists (like me) take solace from the vast numbers of grassroots efforts on behalf of children by educational activists who, against all odds, are dotting the planet with experiments in computer- and Net-based learning. Pessimists find doom and gloom in the odds themselves, which are aggravated by economic forces paradoxically heading in the wrong direction.

Telecom paradox
The nations with the worst and most expensive telecommunications today are precisely those that will pay the highest price in terms of development. In any given developing country, improving the quality and extent of new telecom infrastructure is perhaps the easiest problem to fix. The economics on the demand side are much harder, in large part because usurious billing schemes are imposed by local régimes, whose leaders look upon telecom as a luxury to be taxed. Local calls in Africa, for instance, average US$2 per hour; phoning from one country to the next costs $1.25 per minute. But consider that many of these state-owned telcos are in nations that receive much of their income in hard currency - earned from such steep prices, among other things. This shortsighted approach, however, must change in favor of the long-term economic view.

Computer paradox
Computers keep getting faster, following Moore's frequently quoted law of doubling processing power every 18 months. Played backward, the law should read: At a constant speed, the cost of computers will be cut in half every year and a half. Manufacturing, of course, does not scale smoothly in reverse. But the potential for very low cost computers is wildly more than we have made of it. Why? Because inexpensive computing is a crummy business. The margins are too low and the economic model is that of a commodity, a prospect that frightens American business. US companies just do not know how to tackle the low end. And by "low end" I don't mean the much vaunted sub-$1,000 computer - I mean PCs that cost less than $100.
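To see how quickly that reasoning reaches $100, run Moore's law backward from an assumed $1,000 machine; a minimal sketch (the starting price is hypothetical, the 18-month halving comes from the column):

    # Price of constant computing power, halving every 18 months.
    price = 1000.0                     # assumed starting price
    for year in range(7):
        print(year, round(price, 2))   # year 6 lands near $62
        price /= 2 ** (12 / 18)        # one year = 2/3 of a halving period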

Education paradox
The most troublesome paradox - and the most difficult to change - is that of education itself. Developing countries look longingly at developed nations, with an eye toward copying their education systems. The sad truth, however, is that the Western notion of school stems from an industrial age in which the intellect of children is manufactured like Fords: Instruction is a serial, repetitive process driven by strict norms of curriculum and age. As my MIT colleagues Marvin Minsky and Seymour Papert are fond of pointing out, such schools are an extreme form of age segregation. Six-year-olds study with 6-year-olds, until next year, when they study with 7-year-olds. Only schoolchildren with siblings get the real advantages of age integration. Mind you, this isn't just younger children learning from older ones - little brothers and sisters helping their older siblings with computers has become a hallmark of our day. Age integration is a fundamental change we need to consider as part of revisiting the concept of school.

The little red schoolhouse


One-room schools are often believed to be a sad consequence of poverty. But instead of a problem, they may be a solution. These schools, which may make up as many as half the number of primary schools on the planet, are driven by the principle that young children should learn as close to home as possible. The result is an educational environment that is small, local, personal, and age-integrated and that potentially provides a much richer learning experience than larger schools in urban environments. My advice to political leaders in developing nations: Adopt an educational strategy that focuses digital technology on primary education, particularly in the poorest and most rural areas. The goal is not to boost national standards or to stem the population flow into urban areas, though these may be by-products. The mission is to learn a lot more about learning itself. In the process we may find new models of education that can be used in all parts of the world - rich and poor, urban and rural. The catch is access.

LEOpolitical learning
Low Earth orbit satellites, or LEOs, are the wave of the future. The first such system, Iridium, will be put into service in September with 66 satellites serving the world as a single telecommunications system. Think of it as a cellular telephone grid - but one where you are stationary and the grid moves. Iridium, conceived in the late '80s, is optimized for voice, not data, but in a few years it will be followed by a next generation of LEOs (Teledesic being the most celebrated) optimized for the Net. When that happens, suddenly, being rural does not matter. Being in the most remote part of the planet does not matter. In fact, such places are precisely where LEOs will not otherwise be saturated with urban traffic. By contrast, when you physically wire the world, remote places become the most expensive to serve. With LEOs, you have to cover the whole world in order for any single part of it to work - rural and remote access, in a sense, comes for free.

In the next five years, LEOs will thus change the balance of access. With very low cost computers and some boldness in education policy, it will be possible to touch the lives of all children, including those in the poorest and most remote regions of the world. The right step to take now is to use whatever means necessary to reach as many one-room rural schools as possible - to learn today about learning tomorrow. These apparently forgotten schools, paradoxically, may provide the best clues for real change in education.

The ideas above are in large part taken from the real plans of the 2B1 Foundation (www.2b1.org/), in cooperation with the Fundación Omar Dengo in Costa Rica. Costa Rica is one of the few nations to seriously embrace computers in primary education; one-room rural schools make up 40 percent of the country's primary schools, serving nearly a tenth of the K-6 population.

Next: Being Anonymous
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.09, September 1998.]


WIRED 6.08 - Contraintuitive


Contraintuitive
I finally switched to Wintel. And what angers me most about the move is that Apple forced me into it. One compelling reason for the change: With the Macintosh's shrinking market share, no start-up or single entrepreneur can feel confident basing his or her work on the Mac; even kids shy away from it, preferring what many children insist are more "serious machines." Another reason, well emphasized by the press, is the low and decreasing number of old third-party developers, and the utter absence of new ones. Yet another reason: Mac software and peripherals are far too few in number; even worse, when they do appear, it's much later than the competition. People concerned about tomorrow just cannot settle for the tools of yesterday.

Finally, and unfortunately for Apple, its last-ditch efforts to leap back on the leading edge - the G3-based Power Mac and PowerBook; the iMac, "the Internet-age 'computer for the rest of us'" - are far too late, if not too little. "Pro, Go, Whoa"? No. I already think different. So, sadly, I switch away from a system that I used for almost 15 years, at least three hours a day, seven days a week, without ever once in all those years having read or even opened a manual. The nightmare begins.

Windows as a snowboard
Learning to snowboard is considerably harder if you know how to ski; in fact, the first day requires enormous humility from the otherwise seasoned skier. But after two or three days, your balance overcomes the "unnatural" counterintuitive moves you must make. By contrast, after six months I am still falling all over the slopes of Windows, in total disbelief at the collective complexity and unbelievable inconsistencies introduced by all parties involved. This is an indictment of not just Microsoft, but the entire community of software and hardware developers who have done such a bad job of making usable and explainable systems - so much so that I'm convinced, in dark moments, that some of this is purposeful. A slightly more charitable view follows.



Windows as ugly reality


First impressions die hard. In fact, the initial shock of seeing something completely strange and new sometimes outlasts the relationship. In the world of humans, the cause can be something as minor as crooked teeth or as major as missing limbs. It can also be self-imposed body modification, which might include hair color, nose rings, or tattoos. In any case, the impact wears off eventually. The better you get to know the person, the more you see through the malformation of first impression. Ultimately, the unsightliness disappears almost totally in favor of the person's mind and personality. So far, that has not happened with my new friend Windows.

Windows as a city
Driving in a city for the first time, you are completely dependent on road signs. And far too often the most important one is behind a fully blossomed tree, is unlit at night, has changed names without notice, or uses nomenclature that is understandable only if you know the city. If you are a resident, of course, you never notice these inconsistencies, because you don't use signs to navigate. You already know where you are, where you're going, and how to get there. Though some cities try to use universally recognized, "intuitive" road signs, the city of Windows certainly needs to be much more friendly to nonresidents. System designers take note: It is time to test-drive your grandmothers.

Past the age of innocence


As a professor and lab director, my job includes forgiving all sorts of defects and omissions in favor of encouraging the positive elements of new ideas and their imaginative demonstration. Frequently the very innocence of the application design is part of its beauty. Nonetheless, the sad truth is that a great demo at the Media Lab is often refined to death, thanks to the natural human instinct of falling in love with your creation, refusing to let go, and wanting to make it better and better, à la Pygmalion. Almost without exception, however, the most recent release of any software product is slightly worse than what it's replacing. This is true on the Mac as well. Take a deep breath and repeat after me: Leave it alone.

Well-intentioned ameliorations have turned elegant solutions into bloated programs, the "upgrade" often being that you can do almost the same thing in five or more different ways, with inconsistent results. The user is second-guessed so much nowadays that a simple typo can set your computer into disastrous motion. Each time I try to position my cursor in Word, I enter into an argument about what I mean - it is so clever! Yet something as basic (to Mac users) as click versus double-click is not handled consistently. Puh-leazzze. On the other hand, surely it is possible for software designers, in and outside Microsoft, to be consistent about such simple tasks as exiting, quitting, or closing a program - three words should not be license for three ways of doing the same thing. Also, of particular annoyance is the complexity of establishing a modem connection while on the road - huge effort is required to outsmart the smart dialer, which is so stupid as to assume you will dial long distance.

But it is purposeful, dummy


Thus contraintuitive, not counterintuitive. Every software company has a Department of Guaranteed Revisions, real or fictitious, assigned the all-important mission of securing future sales. This doesn't mean updates and bug fixes - which ought to, but don't necessarily, come free. It means new features. Therein lies the problem. Looked at one by one, these new features may have some merit to some people. But as they grow in number, a simple boxwood hedge starts looking like a jungle of poison ivy.

In an effort to hack through it all, I installed AltaVista to search my disk drive. Alas, it cannot open most of the files, and it sends me through four painful hunt-and-clicks to finally choose one out of 53 programs to open the file. This army of "viewers" is presented through a tiny window that allows you to see no more than seven items at once, as you scroll, in my case, to WordPad, the bottom of the list. At least it works half the time. Still, if there is anything that a computer should know better than me, it is how to open a file. I guess that is the next release.

Next: One-Room Rural Schools
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.08, August 1998.]


WIRED 6.07 - The Future of Retail

Message: 60 Date: 07.1.98 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject: The Future of Retail

You enter a store. You see something you like. You write down the product name and manufacturer. You go home and order it over the Internet. As a result, you didn't have to carry it, you probably got a better price, and you may have avoided sales tax. The store in this scenario is merely a showroom. Have I just described the exception to tomorrow's retail, or the rule?

Beyond indoor advertising


Already today, going to a bookstore may be the worst method of buying a particular book. All the elements are against you: weather, time, energy, price, not to mention availability. Instead, by logging on to, say, Amazon.com (my favorite), you can order the book in less time than it would take to call and see if your local bookseller has it in stock. Bookstores, of course, are no longer just for buying books. They are for browsing, meeting people, having coffee, and engaging in the serendipity of life - bumping into the unexpected. The real "product" is not mere paper and ink, but a place to conduct educational and social entertainment. This logic clearly extends to many aspects of shopping for kitchenware, clothing, specialty foods - more or less anything for which there is a niche or mainstream channel for direct marketing or catalog sales. Retailers beware: You must offer all sorts of value beyond the literal merchandise. This goes for Wal-Mart as well - being big will not save you.

Rightfully chicken
For the most part, manufacturers of toys, cars, clothes, et cetera, seem less than eager to advocate that you disintermediate the middleman and instead buy directly from them. Though that would be more profitable for the producer and less expensive for the consumer, it would also alienate the single largest outlet for toys, cars, clothes, et cetera - the retailer. Still, consumers will inevitably provide the pressure for change. They will band together to buy cars as a fleet and at fleet prices. They'll organize by church group to buy Barbies directly from Mattel. In the digital world, consumers hold almost all the power, which is a nice change. What consumers don't do, entrepreneurs will, with megastores, auctions, and swap meets - all in cyberspace. And they will do so without paying any rent to anybody.

Thought for food


The most challenging and challenged form of retail is the food supermarket (there are, of course, several well-known and successful cyber services in the US and UK; check out, for one, www.groceries-online.com/). A number of things make grocery shopping so challenging. The next time you leave a supermarket, just take a look at your shopping cart and imagine those items coming to your home one by one. It would be both a traffic jam and a logistical nightmare, not to mention the clamor of the doorbell constantly ringing. At the same time, home delivery of all sorts of things is far from a historical oddity. When I was young, my mother would call the grocery store and say what she wanted. It would be delivered in minutes. So what is new? What has changed is that a great many of the staples you buy at the supermarket are now available elsewhere. And in the digital world, you may find considerable advantage in buying some of those staples directly from the manufacturer. This applies equally well to Pampers and Pabst Blue Ribbon.

Midnight express
The catch is, you're never home. More important, you are least likely to be home when packages are most likely to be delivered - that is, daytime. Among other things, we need to rethink the concept of a mailbox, originally conceived for letters, themselves a dying breed (other than bills). The mailbox of tomorrow ought to be a cubic yard, with the potential for refrigeration. Various schemes might further protect goods from the errant courier and provide receipts as needed. In terms of delivery, the empty streets of nighttime can be used to transport all the things that people buy over the Net. That is, after all, how your newspaper is delivered. And there is no reason for the morning news not to be accompanied by fresh bagels - media companies should note the opportunity to cook up a cobranded product called "The Daily Bread."

Global cottage industries


While retail is indeed at risk of being disintermediated, the products may start coming from places other than giant manufacturers or distributors. I expect most people to avoid home-brewed alternatives to brand-name laundry detergent, medical products like estrogen, or children's car seats. On the other hand, most of us would probably prefer homemade jam, bread, and soup, not to mention family-made wines and olive oils. The concept of merchant and consumer will change. We'll see a lot more peer-to-peer buying and selling. Even the most earthbound occupations will be affected. Today, garden supply Web sites are selling direct to consumers with the help of express delivery (see, for instance, www.garden.com/). Of course, the ultimate form of green-thumb disintermediation may be for a cow to poop into a preaddressed FedEx bag. But that still may be pushing the envelope.

The shopping experience


What will finally save retail is the shopping experience itself. This will certainly include architecturally interesting settings with every salesperson a Cindy Crawford, a theater- or museum-like experience that makes you feel special. On the other hand, it might mean a bargain basement of sale items whose prices are hard to believe and even harder to find, a game of hunting and gathering, where buying is like catching a fish. Or it could just be a place people want to be, to see and be seen, to compensate for the virtual and OD on the real - to buy something, maybe, or maybe not.

Another kind of retail, however, is truly about to end - the type where you can't park, the checkout lines are interminable, the staff is disagreeable, and the product has always run out. Owners of such operations should be advised: The digerati don't need you any longer. And very soon everybody will be digital.

Next: Contraintuitive
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.07, July 1998.]


WIRED 6.06 - Bandwidth Revisited


Message: 59 Date: 06.1.98 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject: Bandwidth Revisited

Even after several generations of being digital, the basics of bandwidth still cause all kinds of confusion: Where does it come from? How much do you need? What does it cost? It's hard enough to understand without the detours (sorry) caused by likening bandwidth to a highway - the on-ramps, the off-ramps, not to mention the roadblocks and tollgates. Rest assured that adding an ISDN or cable modem "fast lane" for your home PC does not solve all the problems. A more appropriate, if less concrete, likeness might be paranoia, because real and perceived bandwidth are widely separated, and you cannot tell how much of the problem is your own fault.

The real frustration comes from the inability to speed up the process by taking any one or, for that matter, any number of measures. The World Wide Wait, as it is too often known, is a chain of many events mostly outside your control, the slowest link of which determines the verve of your connection. Worse, the slowpoke in the connectivity chain is hard or impossible to identify. Imagine waiting for a bus, not knowing how many people are in line, where you stand in that line, when the bus is coming, or how big it will be when it does. (Whoops - just skidded into another roadway metaphor.) One of the best ways to deal with bandwidth is to understand it on its own terms - which is not easy.

Not as advertised
Bandwidth is the capacity to deliver bits, typically measured by how many you can transfer in one second. Newcomers often don't know the difference between bits and bytes - there are 8 bits in a byte, which happens to be enough to represent a single ASCII character, including standard Latin alphanumeric characters, punctuation, and most accents. Without going into the brutal details, suffice it to say that you would like any string of 1s and 0s you transmit to be the same as those that are received. This cannot be blindly guaranteed without spending some of that same bandwidth to deliver extra bits for the sake of checking, correcting, or, in the worst case, requesting that bits be resent. When it comes to bandwidth, you're not getting all you think you are - a bit like the coverage of an insurance policy. But that's OK, if you listen to the press, because a lot more bandwidth is coming, though nobody is sure exactly when. The 16 million-plus miles of optical fiber found in the US alone will soon have the capacity to carry 400 billion bits per second, thanks to recent technology from Lucent (AT&T's former hardware house). The telex, by comparison, operated at 75 bits per second.
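As a rough illustration of the gap between advertised and delivered bits, here is a minimal sketch; the 10 percent overhead for checking and resending is my assumption, since real protocols vary:

    # Effective throughput once some bits are spent on error handling.
    line_rate = 28800             # advertised modem speed, bits per second
    overhead = 0.10               # assumed share lost to checks and resends
    effective = line_rate * (1 - overhead)
    page_bits = 2000 * 8          # a ~2,000-character ASCII page, 8 bits each
    print(page_bits / effective)  # ~0.6 seconds, not the ~0.56 you'd hope for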

All in good time


While 400 billion bits per second sure sounds fast, think of the following: Each CD you own contains roughly 5 billion bits. Imagine a warehouse full of CDs loaded onto a 747 and flown from New York to Washington, DC, in an hour. Guess how many bits per second that is. Those "Boeing bits," of course, are not accessible in real time. And so? Do you really need all that bandwidth? Do you need it continuously, instantly, versus in a few moments? I often liken bandwidth to a restaurant, where slow service may not matter if you are in good company. Understanding bandwidth needs is further complicated because bit requirements are all over the map. Words are thin and video is fat: You read (the Latin alphabet) at about 600 bits per second and you watch television at about 3 million bits per second. A picture is not a thousand words, but more like a million.
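Taking the 747 riddle literally is instructive; a minimal sketch (the 100,000-CD cargo load is my assumption, the per-CD figure is the column's):

    # "Boeing bits": bandwidth of a plane full of CDs, NY to DC in an hour.
    cds = 100_000                    # assumed cargo
    bits_per_cd = 5_000_000_000      # ~5 billion bits per CD
    print(cds * bits_per_cd / 3600)  # ~1.4e11, i.e., ~140 billion bps

Even a modest cargo load, in other words, rivals the 400 billion bits per second promised for fiber - just not in real time.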

All bits aren't created equal


After countless hours (and untold billions of bits) of media coverage of the first O. J. Simpson trial, the verdict was rendered with only one bit: guilty or not guilty. I'm sure O. J. values that bit more than any other in his life. If you have a pacemaker, which sends a few bits each hour to the hospital, there is no doubt that you value those bits more than any of the trillions in Titanic. Not all bits are created equal. But who determines their value, not to mention their priority? Right now, on the Web, you are more or less subject to the whim of any site. If the site has sold an advertising banner that greets visitors right at the front door, you may have to swim through hundreds of thousands of winking and dancing bits before you get the measly 250 words (an old measure indeed) you were looking for, which alone would have taken about two seconds on your friendly 28.8K modem.

Clogged pipe
I remember graduating from a 110-bits-per-second modem to 300 bps and feeling the astonishment of speed. Later, 1,200 bps felt like a miracle and 9,600 was lightning. But thereafter, for me, it came to a thud, though I continued to get more bandwidth - there was even a time when I had more Internet bandwidth coming into my home than the entire nation of Switzerland. That fat pipe now feels empty simply because other forms of congestion get in the way. Communications software has become obese - the comfort of the saddle has separated us from the horse. Intercountry links are slow, often purposely so, because the self-interest of governments or telcos (often the same) is not well served otherwise. Servers are swamped, because bandwidth is also an issue inside a computer, as in the speed at which a processor can talk to memory.

Bandwidth as clean air


The big issue in the near future will be how to charge for bandwidth, if at all. The world of atoms would have us think that large and heavy packages traveling halfway around the world should somehow cost more than a tiny, featherweight object crossing the street. But in the world of bits, that is not necessarily so. I am willing to pay a great deal more, for example, for a few pacemaker bits getting to my local hospital than for receiving CNN in Kathmandu.

At first glance, decoupling bandwidth from value is a nightmare for the telecommunications industry. After all, it costs real money to build the wired and wireless infrastructure needed to wrap the world in unlimited bandwidth. For this reason alone, it will be a while before bandwidth is priced like clean air. Until then, we will all be using bandwidth like scuba tanks. Web designers take note: It pays to be parsimonious.

Next: The Future of Retail
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.06, June 1998.]


WIRED 6.05 - Taxing Taxes


Message: 58 Date: 05.1.98 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject: Taxing Taxes

After discovering the basic principle of electromagnetic induction in 1831, Michael Faraday was asked by a skeptical politician what good might come of electricity. "Sir, I do not know what it is good for," Faraday replied. "But of one thing I am quite certain - someday you will tax it." Little did he know how right he was, though more than a century would pass before the word bits existed.

The idea of taking a tax bite out of digital communications comes courtesy of The Club of Rome, specifically Arthur Cordell and Ran Ide's 1994 report "The New Wealth of Nations." More recently, redistributing the benefits of the information society has been championed by influential economist Luc Soete, director of the Maastricht Economic Research Institute on Innovation and Technology. Despite their repute, supporters of such a bit tax are clearly clueless about the workings of the digital world.

Tax bytes
A typical book contains about 10 million bits, which might take even a fast reader several hours to digest. By contrast, typical video - digital and compressed - burns through 10 million bits to produce less than four seconds of enjoyment. A bit consumption tax, in other words, makes no more sense than tariffing toys by the number of atoms. Maybe the information highway metaphors have gone to the heads of digitally homeless economists, who think they can assess value by something akin to counting cars.

Of course, collecting taxes can be tough enough without trying to assess something you can't see, especially when you don't know where it is going to or coming from. This helps explain why the Clinton administration in late February reaffirmed its commitment to making cyberspace a global free-trade zone. The policy's purpose, the brainchild of White House senior adviser Ira Magaziner, is both economic stimulus and practicable fairness. So whether or not Congress has kept its promise to vote on the related Internet Tax Freedom Act by early spring, the legislation has the full force of careful deliberation - and historical inevitability - behind it. For these and other reasons, Europe abandoned the bit tax. But the idea still survived three and a half years of consideration, despite the growing awareness that bits by their very nature defy taxation.
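The mismatch is easy to quantify; a minimal sketch using the column's own figures (the three-hour reading time is an assumption pinning down its "several hours"):

    # Bits consumed per hour: reading a book versus watching video.
    book_bits = 10_000_000                 # ~10 million bits per book
    reading_rate = book_bits / 3           # ~3.3 million bits per hour read
    video_rate = (book_bits / 4) * 3600    # the same bits last 4 seconds on video
    print(video_rate / reading_rate)       # ~2,700

A flat bit tax would therefore charge a video watcher roughly 2,700 times what it charges a reader for an hour of attention.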

The locus pocus of sales tax


Even so, the principled position taken by Clinton and Congress comes, in part, because making the Net a free-trade zone works for the US federal government. The Treasury derives most of its revenues from personal and corporate income taxes. If the economy sees a boost from any form of free trade, the Feds will see a proportionate rise in their own intake. Simple arithmetic. However, many countries and most states don't work that way. Instead, a sales tax is the means - often the principal means - of filling government coffers. Ohio governor George Voinovich, chair of the National Governors' Association, declared that the Internet Tax Freedom Act "represents the most significant challenge to state sovereignty that we've witnessed over the last 10 years." Both he and the act may be right.

The sales tax is also particularly popular among bureaucrats in developing nations, where collecting income tax is even harder because the poor make so little and the rich can avoid so much. Plus, the sales tax turns retailers into a nationwide web of tax collectors. And the tax is "fair" because it's based on what you spend versus what you earn. Still, Voinovich and company would be smart to start looking elsewhere, because their receipts will plummet as we buy more and more online, especially if what we buy are bits.

The VAT vat


While the sales tax is fairly commonplace, the value-added tax is more or less unknown in the United States. Loosely speaking, it taxes the various stages of transforming raw material into a finished product, the last stage of value added being what you pay at the retail counter (and get back at the airport's VAT-refund counter). This kind of tax makes even less sense in the world of bits. Assume that bits are my stock in trade and I use Microsoft Word to refine my raw material: Should I pay a VAT for spellchecking each story? Should I pay a VAT to have it encrypted and another to have it decrypted, not to mention on each of the layers of value added by various editors? In fact, as a cheerful taxpayer, if I have to pay taxes on bits - at least those that make up words - I would be willing to pay a higher VAT for the fewest possible bits: just the right ones, please. That would be value added indeed.

Jurisdiction in jeopardy
But the most taxing aspect of cyberspace is not the ephemeral nature of bits, the marginal cost of zero to make more of them, or that there is no need for warehouses to store them. It is our inability to say accurately where they are. If my server is in the British West Indies, are those the laws that apply to, say, my banking? The EU has implied that the answer is yes, while the US remains silent on the matter. What happens if I log in from San Antonio, sell some of my bits to a person in France, and accept digital cash from Germany, which I deposit in Japan? Today, the government of Texas believes I should be paying state taxes, as the transaction would take place (at the start) over wires crossing its jurisdiction. Yikes. As we see, the mind-set of taxes is rooted in concepts like atoms and place.


With both of those more or less missing, the basics of taxation will have to change. Taxes in the digital world do not neatly follow the analog laws of physics, which so conveniently require real energy, to move real things, over real borders, taxable at each stage along the way. Of course, even analog taxation without representation is no tea party.

Getting physical
Looking ahead, taxes will eventually become a voluntary process, with the possible exception of real estate - the one physical thing that does not move easily and has computable value. The US has a jump-start on the practice, in that 65 percent of local school funds come from real estate taxes - a practice Europeans consider odd and ill advised. But wait until that's all there is left to tax, when the rest of the things we buy and sell come from everywhere, anywhere, and nowhere.

Next: Bandwidth Revisited
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.05, May 1998.]


WIRED 6.04 - RJ-11


Message: 57 Date: 04.1.98 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject: RJ-11

In telecommunications parlance, the "last mile" is endlessly debated in terms of wired versus wireless, symmetry versus asymmetry, and bandwidth needs - real, perceived, or actually used. This story is about the "last centimeter" - its bad design, unreliability, and public absence when you really need it. Think of it: the lowest common denominator in being digital is not your operating system, modem, or model of computer. It's a tiny piece of plastic, designed decades ago by Bell Labs' Charles Krumreich, Edwin Hardesty, and company, who thought they were making an inconspicuous plug for a few telephone handsets. Not in their wildest dreams was Registered Jack 11 - a modular connector more commonly known as the RJ-11 - meant to be plugged and unplugged so many times, by so many people, for so many reasons, all over the world.

Not a jack of all trades


How many RJ-11 clips have you broken? I am astonished that something that probably costs less than a penny separates me from the Net so often. It seems I'm constantly carrying a cord with a broken RJ-11 connector at one end or the other. Mind you, this is caused not just by normal wear and tear, but by a design that causes the small plastic clip on the male connector to catch on various articles when you pull the cord out of a briefcase. The half-life of an RJ-11 plug on the road must be less than a month. Ironically, some new RJ-11 female connectors add insult to injury - they are spring-loaded for better contact, which renders a clipless male plug useless. At least in the past you could pop it in and hold your breath. Nonetheless, the RJ-11 has become a world standard. More than a billion RJ-11s have been manufactured to date; as it happens, the connector is considerably more common in fax machines than handsets in most countries. In any case, there is little likelihood that this physical standard will be replaced by anything other than wireless connections - the usability and reliability of which is a whole separate story. Suffice it to say that most people will be plugging in for a long time. Since we'll have to live with the RJ-11 for a while, we can surely make it easier to use than it is now.

Dongling participles
Dongle supposedly comes from the verb to dangle. If you do not have one, consider yourself lucky. I travel with four. A dongle is a hardware key and cable assembly that attaches to an external port; one of mine takes the otherwise solid female part of an RJ-11 and introduces flimsiness and delicacy to map the thin profile of a PCMCIA card - what a really dumb name - into the roughly square form factor of the RJ-11. My advice to anybody planning to purchase a laptop: don't buy one that does not have a built-in RJ-11. If you do, you are simply adding another point of weakness in your connectivity and will in all likelihood find yourself with the wrong dongle just when you need it.

Airport dilemma
One reason to join airline clubs is to have access to RJ-11s - and, often, free local phone service. This is fine for those who can afford a membership, and if the airport you happen to be in at a given time has a club with RJ-11 jacks. Otherwise, you are too often captive to a national public phone system that seems not to have heard of data communications. With the exception of a rare AT&T pay phone, which looks like a pregnant Sega game, your only hope is an acoustic coupler. But this is yet another thing to carry - and it's not particularly reliable at that. Surely we can build more pay phones with RJ-11 jacks. In fact, an RJ-11 only pay phone would not need a keypad, credit card reader, or coin slot; your PC would send the number and billing data. This would be the least expensive "phone booth" ever made.

Hotel malice
In some countries, especially those in western Europe, phones are still hardwired into the wall. In others, phones might use any one of nearly 200 phone jacks. Still, more and more places are accommodating or switching to the RJ-11 in the wall, in the phone, or as an auxiliary jack in the handset - the latter being the most appropriate in a hotel room. Some hotels still don't have such auxiliary jacks in the handsets, offering the lesser convenience of the RJ-11 in the wall. But because hotel managers also have learned that constant use breaks the clip, many cut it off, making the plug a onetime "permanent" connection, never to come out again. That is inexcusable. Even the most benign digerati will use anything from a penknife to a corkscrew to reopen the jack, the effect of which is well deserved but devastating. Get with it, hotels. I was thrilled to see that the latest Zagat hotel guide includes a ranking of computer friendliness. About time.

Getting it straight
Yet even if you are lucky enough to get a room with an easily removable, seemingly usable RJ-11 jack, don't be surprised if it does not work - i.e., there's no dial tone. Though the plug itself has become fashionable, in some cases the wiring is not consistent, especially in small telephone exchanges. The RJ-11 module has up to six wire conductors, but a simple phone connection needs only two. And while most of the world agrees on which two to use, just enough places (usually hotels, alas) don't. (A sketch at the end of this column shows the usual assignment.)

This is one of those exasperating instances in which we cannot even agree which way is up. As best I can tell, it's a 50/50 bet as to whether you will find the clip on the top or the bottom - sometimes it is even set sideways. (The problem isn't just with technicians installing hotel wiring: two models of PowerBook had it one up and one down.) While this may seem nitpicky, the problem is - literally - more than meets the eye. Because RJ-11 sockets are often sufficiently recessed that you cannot easily see the jack's orientation, you have to use trial and error - and error does the plug no good.

So the next time you travel, the next time you connect, think about this critical little piece of plastic. Don't you wish someone would make an unbreakable connector, even one priced as high as US$100? Maybe it is time for designers across the board to agree that RJ-11 clips should go on the top. There's no real reason to prefer the top to the bottom, but if we all did it one way, over time we might just get it straight.

Next: Taxing Taxes
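The convention at issue is simple enough to sketch. The pin assignment below is the common one (a six-position plug, with a single line on the middle pair); it is an illustration of the usual case, not a wiring spec, and the details do vary by country and, as noted, by hotel:

    # Illustrative only: the usual six-position RJ-11 assignment.
    # A single phone line needs just the middle pair.
    PINS = {1: None, 2: "line 2", 3: "line 1", 4: "line 1", 5: "line 2", 6: None}

    single_line = [p for p, line in PINS.items() if line == "line 1"]
    print(single_line)   # [3, 4] - and yet "just enough places" wire it differently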
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.04, April 1998.]

NEGROPONTE

Message: 56 Date: 03.1.98 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Toys of Tomorrow
Why would Professor Michael Hawley swallow a computer? Because he plays. He plays the piano. He plays hockey. He plays with ideas. In fact, he plays with notions like running the Boston Marathon with a radio transmitter pill inside his stomach, from which his core body temperature measurements would be broadcast to any and all media willing to listen (ttt.www.media.mit.edu/pia/marathonman/). The wild, the absurd, the seemingly crazy: this kind of thinking is where new ideas come from. In corporate parlance it's called "thinking out of the box." At the MIT Media Lab, it's business as usual. The people capable of such playful thought carry forward their childish qualities and childhood dreams, applying them in areas where most of us get stuck, victims of our adult seriousness. Staying a child isn't easy. But a continuous stream of new toys helps.

"You get paid for this?"


Many people accuse the MIT Media Lab of being a giant playpen. Well, they're right. It is a digital wonderland overflowing with outrageous toys: all imaginable sorts of computers and interface paraphernalia. Play, however, is a pretty serious business in the hands of students and professors like Hawley - it's 24 hours a day, seven days a week. And some profound results, both scholarly and commercial, come out of this play.

Of course, a few naysayers - forgetting that the world has a lot more money than good ideas - insist that such playful behavior is something companies cannot afford, in terms of either money or image. Hence the duty of academic institutions to be, among other things, more playful. This sounds simple, but is so true: when people play, they have their best ideas, and they make their best connections with their best friends. In playing a game, the learning and exercise come for free. Playing produces some of the most special times and most valuable lessons in life.

Still, many teachers and parents consider the classroom and the playground to
be worlds apart. But are they? When a young child plays with a toy, the interaction can be magic. Toys unlock that magic - part of it in the toy and part in the child's head. Toys are the medium and the catalyst of play.

Recognizing the power of play, Hawley and company are fundamentally rethinking toys, exploring the convergence of digital technology and the toys of tomorrow - another case where bits and atoms meet. Computers have changed almost all forms of work. And, since play is the work of children, it is time to revisit the tools of their trade.

TNT: toy networking technologies


The Internet is largely composed of desktop computers, assembled like the world's biggest pile of Tinkertoys. These days, many people talk of extending the network beyond desks and into all sorts of appliances, large and small. There is no question that appliances like refrigerators or doorknobs should be networked. But what might happen if toys were networked, too? If each Mickey Mouse and Barbie had an IP address, their population would exceed that of a small, well-connected country.

Every year, 75 percent of all toys are new, meaning newly designed that year. The toy industry lives and dies on invention. Toys gush into homes every Christmas and Hanukkah, every birthday, and lots of other days besides. This tremendous churn rate means that toys are well matched to the pace of change in the digital world. You can and should put some form of computing in a refrigerator, but a new fridge enters the house only once every 20 years. With their far faster turnover, toys may be the fastest moving and fastest evolving vehicles on the infobahn.

Toys of tomorrow will be networked. Today, they rarely intercommunicate. There is no MIDI for toys, no Internet link. Once tomorrow's powerful networks, simulators, and synthesizers are commonly interconnected through toys, a next generation of exquisite musical toys - a wonderful idea to begin with - will emerge. A toy piano that sounds like a Steinway. A baby rattle that conducts a symphony. Blocks that build a melody. Shoes that carry a tune (think karaoke for your feet). Every toy a link in a worldwide toy box.

And every toy must be inexpensive. Today's typical toy costs about US$20, which means it wholesales for $14, and must be built for about $5. Forget the $1,000 computer or the $200 set-top box - invent a $5 computer that doesn't look or act like a computer. That's a grand challenge for the digital industries: melt a Cray down into a Crayola.
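What might a "MIDI for toys" look like? Purely as a hypothetical sketch - no such protocol existed, and every name below is invented - a networked toy could emit small, self-describing events, the kind of message a $5 computer could afford to generate:

    import json

    def toy_event(toy_id, kind, value):
        """Encode one toy event as a tagged, self-describing message."""
        return json.dumps({
            "v": 1,          # version tag, so next year's toys can still parse it
            "toy": toy_id,   # every Mickey and Barbie gets an address
            "kind": kind,    # "shake", "note", "step", ...
            "value": value,
        }).encode("utf-8")

    # A baby rattle conducting a symphony: one shake, broadcast to the toy box.
    print(toy_event("rattle-42", "shake", 0.8))

The syntax matters far less than the self-description: a versioned, tagged message is what would let this year's rattle play along with next year's blocks.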

The real toy story


Today, a conservative computer industry still seems determined to push laptops into the hands of fat-fingered 50-year-olds, with "Net PCs" just an infrared click away from tomorrow's couch potatoes. Surely we can do more than that. But how?

Hawley and others at MIT have been making new friends around the world to help invent toys. Their new business partners these days include Lego, Disney, Mattel, Hasbro, Bandai, Toys "R" Us, and others. Their other playmates are computer, communications, and entertainment companies like Intel, Motorola, Deutsche Telekom, Nickelodeon, and, believe it or not, the International Olympic Committee. Never before have the world's leading toymakers, technology companies, and sports organizations collaborated in such a way - which is just terrific, because the new world of digital toys won't be invented by any one group.

Nobody is quite sure what will turn up on this new road to invention. The program just started. Stay tuned. But one thing is clear: Toys of tomorrow will carry some of the most awesome and inspiring technology humankind has yet created and place it in the hands of children. Where it belongs.

Think of it this way. Being "wired" does not mean becoming "computer literate" any more than driving an automobile requires becoming "combustion literate." The power of toys is that they reach back to and shape the earliest years in our lives. One day, our grandchildren will naturally assume that teddy bears tell great stories, baseballs know where they are, and toy cars drive themselves with inertial guidance. Lucky them.

Next: RJ-11
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.03, March 1998.]

NEGROPONTE

Message: 55 Date: 02.1.98 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Powerless Computing
Until last year, my biggest PC power problem was the inconvenience of carrying around four to six spare batteries. At least, I told myself, it's good exercise. Two very different circumstances made me rethink powering computers.

The first was a story in the February 28, 1997, issue of The Wall Street Journal. PCs, the paper reported, are mostly used like "potted plants," yet according to the Sierra Club, this wasted resource can account for 200 pounds of carbon-dioxide pollution every year - about 2 percent of what's emitted by a car that is "actually doing something." Turns out the story was just plain wrong. A desktop computer running continuously requires less than half a percent of the energy used to power a car (ditto its carbon production). And a laptop can reduce energy consumption to less than 10 percent of what's used by a typical microcomputer.

My second encounter came in July, at a gathering hosted by the Media Lab and the 2B1 Foundation. Participants from 45 developing nations spent six days sharing ideas and experiences about introducing, against all odds, computers to Third World education. At first glance, the odds seemed stacked against them three to one: the high cost of computers; the low availability of connectivity (affordable or otherwise); and the arrested development of educational theory, practice, and politics. Another challenge, however, proved even more basic: power. In the poorest countries, some schools and most homes don't have any. In fact, more than one-third of the world's population is without electricity.

One of the 2B1 participants, Peter Patrao from India, offered a simple solution: bicycles. Vigorously riding a bike generates about 100 watts. The image conjured by Patrao's classroom is certainly a cute one - think of half the class pedaling while the rest work on PCs, redefining, among other things, "recess."

Additional solutions to powering PCs ranged from car batteries to more imaginative ways of harnessing the wind and the sun. Then, in August, inventor Trevor Baylis, following his work on windup radios, reported success with a windup computer - clear progress toward curbing the PC's high-powered appetite.

Power diet
A laptop's power supply gets eaten up by the display, the disk drive, and the circuitry, in that order. The display takes the biggest bite, typically 25-40 percent (and rising, as processors go to lower voltages). For a variety of technical reasons, backlit displays have so far provided the best contrast ratio and highest brightness. The power problem is that most of the light is lost in transmission: typically less than 10 percent gets through the flat panel. The rest is dissipated as heat. Still, an LCD uses five to ten times less power than a CRT.

A reflective display, by contrast, uses almost no power, taking most of what it needs from ambient light. This is why most calculators and all wristwatches require only tiny power supplies. So far, nobody has achieved an active-display medium that can reflect light with sufficient contrast. Actually, one reflective display does a pretty good job - paper. In fact, "digital ink" has made significant progress in labs. (See Wired 5.05, page 162.)

Considerable power is also consumed by a disk drive, which is why drives typically spin down then start up as needed. As it happens, there are all sorts of other reasons to get rid of moving parts, a direction the industry is already pursuing. The rest of a laptop's power consumption comes from circuitry, which can be made very power-efficient with modest trade-offs in performance.

With the exception of the display, then, industry trends do not necessarily fly in the face of low-power computing. In fact, Intel and Toshiba have massive programs under way in so-called flash memory, which uses no power to hold onto data, a little to read it, and a little more to write it. In short, making a very power-efficient computer - one that uses only a bit more than your wristwatch - is not quite as pie in the sky as you imagine. Some of the issue is just fire in the belly.
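The budget arithmetic is worth a quick sketch. The percentages are the ones quoted above; the 15-watt total is my own assumed figure for a typical laptop of the period, so read the output as illustration rather than measurement:

    laptop_watts = 15.0                # assumed total draw (not from the column)
    display_share = 0.35               # column: display takes 25-40 percent
    display_watts = laptop_watts * display_share

    # Column: less than 10 percent of the backlight's light escapes the panel,
    # so nearly all display power ends up as heat.
    useful_light = display_watts * 0.10
    wasted_as_heat = display_watts - useful_light

    print(f"display: {display_watts:.1f} W, of which ~{wasted_as_heat:.1f} W becomes heat")

A reflective display living off ambient light would reclaim almost all of that - which is why paper remains the display to beat.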

Nabisco and IBM


What the cookie company and the computer company have in common, other than a great CEO, is an interest in calories. Could this possibly be translated into a PC powered with chocolate chips? Said differently, can the energy in your body be used to power a computer? Many wristwatches, of course, do just that. Take, for example, a recent line from Swatch: rather than rely on an old-fashioned mechanical spring, these watches use body motion to charge a
battery, which then drives an electronic timepiece. These would have been perfect for notorious British publisher Bob Maxwell, who once told me that the last time he did any exercise was when he wore a watch that needed winding.

Thad Starner, an MIT PhD student, has studied human-powered computing in some detail and with considerable self-interest - he has worn his computer for more than five years. Though the human neck can generate considerable heat, Thad has concluded, locomotion is the best source of power. He estimates that 5-8 watts can be recovered from walking. A great deal of body energy, in other words, is simply dissipated, like waves crashing onto a beach. Recovering just a bit of it could be quite important for "effectively" powerless computing.
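Putting this column's figures side by side makes the scale clear. The pedaling and walking numbers come from the text above; the laptop draw is my assumption:

    pedaling_watts = 100.0   # vigorous bike riding, per Patrao's classroom
    walking_watts = 6.5      # midpoint of Starner's 5-8 W estimate
    laptop_watts = 15.0      # assumed draw for a typical laptop of the period

    print(f"one cyclist can power about {pedaling_watts / laptop_watts:.0f} laptops")
    print(f"walking recovers about {walking_watts / laptop_watts:.0%} of one laptop's draw")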

Combustion?
At the turn of the century, steam engines provided about 80 percent of the total capacity for driving machinery. Today, most office equipment uses electricity, which in the US alone accounts for a $2.1 billion energy bill, not counting air-conditioning. That's wildly out of scale with the developing world, especially the poorest countries in, say, Africa, where per capita power consumption is 5 percent of ours.

The answer may be combustion. People are making serious progress in putting a fuel-burning, microelectromechanical engine on a chip. Butane, for example, has very high energy density. With an onboard generator turning that combustion into electricity, you might one day simply fill your laptop with gas if you're tired of pedaling.

Next: Toys of Tomorrow
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.02, February 1998.]

THE THIRD SHALL BE FIRST


THE NET leverages latecomers in the developing world.
By Nicholas Negroponte

At a distinguished meeting of Internet founders in 1994, I suggested that the Net would have a billion users by 2000. Vint Cerf laughed in my face. Others rolled their eyes at what seemed vintage Negroponte hyperbole. Of course, no one expected the Internet to take off the way it has. In fact, those who knew the most about the Net were the most conservative. They knew just how hard it would be to create the technical infrastructure, to invent the appropriate business models, and to proliferate the necessary number of computers. At that same meeting, Cerf estimated 2000 would see 300 million users - a milestone we'll approach in the Americas alone at current rates of growth.

Today, however, people still shake their heads at the number 1 billion. They see no way that the growth witnessed in recent years can be sustained for the next two, let alone five. They are forgetting the ROW - the rest of the world.

Developing developing nations

In the comfort of being digital, we forget the enormous leverage a single Net connection provides to, say, a rural primary school in one of the hundred poorest nations. In these places, there are no libraries and almost no books; the schoolhouse is sometimes a tree. To suddenly have access to the world's libraries - even at 4,800 bits per second - is a change of such magnitude that there is no way to understand it from the privileged position of the developed world. But the ROW understands.

World leaders realize that the most precious natural resource of any country is its children, and that the digital world is key to education. For this reason, development is starting not only to include but to mean telecommunications. The (old) World Bank lent money to develop dams, roads, and factories, but rarely bits. The (new) World Bank, by contrast, is deeply committed to education, and I bet a great deal of the
organization's energy will be directed into telecommunications infrastructure - not for phones, but for access to the Internet. This is where most of the billion users will come from by December 31, 2000.

Celestial coincidence

The planets of change seem to be lining up. Take the privatization of telephone companies. Competition (not to mention technology) has proven that costs will plummet: sooner or later, every civilized place will have a low and fixed rate for unlimited local calls. This will completely change how children use the Net. Yet telephone rates are most expensive precisely where they should be cheapest - in the developing world.

It is time to take celestial intervention quite literally. A combination of geostationary and low-Earth orbiting satellites - GEOs and LEOs - can and will change Internet usage in the ROW, especially for the more than 2.5 billion people who live in poor, rural areas. GEOs are interesting because many of the orbital slots over places like Africa are underused, unused, or, frankly, wasted on broadcast systems. A 1-meter dish, of course, could make all the difference for a remote developing-world school. That's now within reach, thanks to companies like Tachyon, which will soon sell a turnkey satellite link for US$2,700 - a price that promises to drop to $1,500 by the end of 1999.

In the long run, LEOs are even more interesting. The first LEO, Motorola's Iridium, will start service before the end of '98; its 66 satellites will circle the planet and, at least initially, be underutilized in developing countries. It is not hard to imagine the same satellite that was designed for an affluent, roaming cell-phone user being used by a poverty-stricken, stationary child - for bits.

In the past five years, developed nations have jockeyed for position in the digital world. Finland and Sweden are well in the lead in Europe, while their neighbors, France and Germany, have fallen increasingly far behind. In other words, the "Third World" five years from now may not be where you think it is. There have been many theories of leapfrog development, none of which has yet survived the test of time. That's about to change.
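For what it's worth, the billion-user bet reduces to simple compound growth. A sketch, with an assumed base figure (the column names no starting count):

    users_now = 100e6    # assumed Net population at the end of 1997
    target = 1e9         # the December 31, 2000 wager
    years = 3

    required = (target / users_now) ** (1 / years) - 1
    print(f"requires {required:.0%} growth per year")   # ~115% - better than doubling annually

Sustaining a yearly doubling is exactly what the head-shakers doubt - and exactly what the ROW could supply.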

Nicholas Negroponte (nicholas@media.mit.edu), founder and director of the MIT Media Lab, is senior columnist for Wired.
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.01, January 1998.]

NEGROPONTE

Message: 54 Date: 12.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Nation.1

In February 1995, the European Commission hosted a G7 roundtable on the information society. Envoys ranging from heads of state to prominent industrialists debated the Global Information Infrastructure. The Japanese delegation included, among others, Isao Okawa, chair of Sega. His participation was quiet, but his return to Japan was not. He was determined to correct what he saw as a glaring omission: the people most affected by the coming information society - that is, children - were utterly unrepresented. He decided to change this.

Within eight months, Okawa conceived, funded, and implemented the first Junior Summit. For four days in Tokyo that October and November, 41 children from 12 nations convened for a milestone meeting at which adults found room only in the audience. The young people, 12 to 18 years old, addressed issues ranging from the environment and peace to communications; some participants were involved in using the Internet to compose music collaboratively and perform it live for the first time. The event was a resounding success.

Now, some two years later, Okawa is determined to see another such assembly take place in a broader international setting. To this end, MIT has been asked to host the second Junior Summit in 1998, under the direction of Media Lab professor Justine Cassell.

Agents of change
The second Junior Summit presents a chance to increase the number of countries at the table, to give the conference participants more time for discussion, and to let them disseminate their conclusions more widely. Children from every country in the world are invited to discuss the future of young people in the digital age.

Of course, linking children around the world will not in itself solve the problems of world hunger, poverty, and repression. However, children together may make a step toward solving these problems and others that we adults are not child enough to recognize. The simple act of uniting children will widen their perspectives on their own lives, and the lives of those who come after. It will deepen their understanding of their own problems, and the problems of those who are unlike them. It will lead to a better world, as children become
empowered to seek solutions globally and implement them locally. For the adults around these children, this process of discovery can enlighten all efforts to make the information society everybody's society.

The second Junior Summit seeks to engage 200 children between the ages of 10 and 16 from around the world. Participants will be selected based on how well they can document - in their native language, through a video or photographic essay, through a piece of music, or through drawing or painting - the state of children in their community, with particular focus on how the digital revolution is affecting them. Those children who do not yet have anything to document with respect to the digital revolution are asked to give their vision of a global community.

The 200 selected children will meet online for six months of debates, discussions, and the creation of artistic works. Simply participating in the online forum will allow children to be agents of change in their communities - all of those who are chosen will be given computers and Internet connections, which will be set up in their local schools or community centers. After six months online, the participants will choose 60 delegates to represent them at the summit at MIT, where they will solidify their positions and, finally, present their arguments to world leaders. Following the summit, children will be matched with mentors from industry, government, and education who will help them launch local action projects to share the benefits and continue the momentum of the summit.

A better world
One topic on the table will be a proposal by five alumni of the first Junior Summit to start Nation.1 - a virtual nation for children, with its own voice, flag, and currency, but without borders or centralized government. This nation would apply for membership to the UN and make every effort to include children from developing nations. Here is an excerpt from Nation.1's first proclamation:

As a kid growing up with computers, you have ideas, you see possibilities, but they don't count, you're just a kid. Adults need kids, they just don't realize it. They can't relate to what kids have to offer, because they don't understand technology the way kids do. Kids have valuable perspectives, but the world offers no mechanism to voice their opinions. They have no representation in world politics and they have no influence in the decisions that govern their future. So with the help of the second Junior Summit, a group of young, very wired individuals is going to bend, twist, and distort some barriers with the hope those barriers will come undone. We are going to create a country in cyberspace, not defined by geography or race, but by technology and age: Nation.1 - a country populated and run by kids.

Nation.1 is just beginning, and we are considering how to create digital political systems, how
to deal with language barriers, how the technology behind the country will work. We passionately believe it's worth it, because uniting kids changes their perspective, widens their understanding, and leads to a better world.

Proposals like Nation.1 may seem outrageous, even unthinkable, compared with what we adults would have suggested. That's the way it should be: ultimately, the world must go past what adults believe will succeed. The global information society is ours only to dream - it will be up to these children to live it out.

If you are 10 to 16 and interested in the Junior Summit, check out www.jrsummit.net/, or write to Junior Summit, MIT Media Lab, Cambridge, MA 02139. For further information on Nation.1, email nation1@2b1.org or visit www.2b1.org/nation1/.

This column was cowritten with Justine Cassell (justine@media.mit.edu), professor in the Learning and Common Sense section of the MIT Media Lab and director of the Gesture and Narrative Language Group.

Next Issue: Powerless Computing
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.12, December 1997.]

NEGROPONTE

Message: 53 Date: 11.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

New Standards for Standards

I used to think that anybody who worried about standards was boring (perhaps because they were). Now I seem to be one of them.

One of the biggest problems any traveler has with laptop computing, especially in Europe, is the plugs. Europeans have more than 20 different formats, with the only semblance of a standard being the plug used to power an electric razor. There is actually a committee addressing the so-called Europlug - some estimates run upward of a quarter century before such a standard can be implemented, if agreed upon, and then only at huge cost.

Atoms require enormous effort. Agreeing on physical form is very hard. This is not limited to the manufacturing specifications for metal and plastic machinery. The first two months of the 1968 Vietnam peace negotiations in Paris were devoted to determining the shape of the table.

Television finally is about bits


For a long time, innovation in television was more like the design of tables than the design of bitstreams. In 1991, for example, Jae Lim, a senior professor at MIT, announced to the world that "we finally agree on one thing: pictures of the future will have an aspect ratio of 16:9." Mind you, this silly thought was progressive thinking at the time. Most people wanted to commit their great-grandchildren to a specific number of scan lines and a fixed frame rate as well. Part of the reason this parochial view existed was that almost nobody under 40 seemed to care, while the average age of television engineers was over 50. Zenith, which had both TV and computer divisions, was one of the few companies in a position to make a difference, until a new CEO sold half the company - and, sadly, sold the wrong one.

Generally speaking, it is fair to say that advanced television backed into being digital (sorry) for two reasons: digital data compression, to use expensive satellite transponders more efficiently, and digital error correction, to make better use of a decaying cable plant. While those are not necessarily wrong reasons, they are not the right ones, either. The right ones have to do with all the assets of new content that come from the digital world, not the least of these being a
new facility for standards. Bits are easier than plastic.

Modems do it right
Plugs don't handshake. Modems do. This process is not too different from dogs sniffing each other. Modems try their best to communicate at the fastest possible speed, using whatever common error correction they share. Today, some of this is controlled in software; tomorrow all of it can and will be. The reason this works is simple: people have agreed on headers and metadescriptions. In other words, the standard is about how you will describe yourself, not what you are. This is important not only for massive globalization, but also for upgrading and future change.

TV of the future - at least in the US, thanks to the Federal Communications Commission - will be flexible. In spite of the broadcast industry, the FCC refused to set anything but transmission standards. The result will be a slow blend of the Web (as kids know it) and TV (as baby boomers knew it). How a signal arrives, by land or by air; where it comes from, near or far; and what it looks like, a postage stamp or HDTV - all will be described in the signal, not decided by folks in Geneva or Washington, DC.
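The describe-yourself idea fits in a few lines. What follows is an illustrative sketch, not the real V-series procedure (actual modems negotiate down at the signal level), though LAPM and MNP4 are real error-correction schemes:

    MY_CAPS = {"speeds": [33600, 28800, 14400], "error_correction": ["LAPM", "MNP4"]}

    def handshake(mine, theirs):
        """Settle on the fastest common speed and the first shared correction scheme."""
        speed = max(set(mine["speeds"]) & set(theirs["speeds"]))
        ec = next(s for s in mine["error_correction"]
                  if s in theirs["error_correction"])
        return {"speed": speed, "error_correction": ec}

    peer = {"speeds": [28800, 14400, 9600], "error_correction": ["MNP4"]}
    print(handshake(MY_CAPS, peer))   # {'speed': 28800, 'error_correction': 'MNP4'}

Neither side dictates; each describes itself, and the best common ground falls out. That is the kind of standard worth writing down.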

Higher standards
What the standards bodies need to do is turn their attention to some of the larger issues: while God may be in the details, a great deal needs to be said about the broad brush. The reason to make global standards is global communications. This means people communicating with people. And people have the biggest standards problem of all - they often don't speak the same language. If a Martian were to turn an ear toward our planet, conversations around the world would sound like modems unable to communicate with each other. In the face of today's digital globalization, it would be hard to explain the thousand-plus written languages and the scores of spoken dialects.

On the other hand, people constantly question the digital dominance of English. Yet, as I like to remind them, we are glad that a French pilot lands an Airbus at Charles de Gaulle airport speaking English to the tower, as it means that other planes in the vicinity can understand. English as a second language, with or without computers, has become an international protocol of sorts and an accepted means of traffic control - even ship to shore. In the same way, English will continue to be the air traffic control language of the Net 10 years from now. But it will stop being the dominant carrier of content - English will be replaced by Chinese. Still, all sorts of other languages will flourish as well.

I remember once defending small cultures and native tongues in these pages (see "Pluralistic, Not Imperialistic," Wired 4.03, page 216), only to be told by a reader that I got it all wrong. The issue, he said, was not
English versus language X, but English versus ASCII. Boy, was he right. The ASCII standard is a huge problem, not the least of it being the insufficient number of bits for kanji characters or calligraphic fonts. In fact, without taking much note of this limitation, we have cemented ASCII into place in a far more entrenched fashion than English. We had better learn a lesson - and quickly. (A sketch at the end of this column makes the problem concrete.)

That lesson, however, is not to invent another Esperanto, but to realize that our bitstreams will be in different languages, which need some standard headers. Making the Net multilingual-ready is even more important than setting the metastandards for our modems and TVs. International bodies must recognize that a higher level of communications standard is needed to make sure that all languages are equally accommodated and self-descriptive. The 5 billion people not using the Net today have a lot to say. Kids know that.

Next Issue: Jr. Summit
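The promised sketch of the reader's point, using tools that arrived after this column was written (Unicode and UTF-8); the header is my own illustration of what a "standard header" buys:

    word = "漢字"                     # "kanji" - characters ASCII cannot hold

    try:
        word.encode("ascii")          # seven bits per character is simply too few
    except UnicodeEncodeError:
        print("ASCII cannot carry this")

    payload = word.encode("utf-8")    # a multi-byte encoding that can
    header = {"charset": "utf-8", "lang": "ja"}   # the self-description the Net needs
    print(header, payload)

The bitstream carries its own description, and any reader - human or agent - knows what it is looking at.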
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.11, November 1997.]

NEGROPONTE

Message: 33 Date: 3.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Pluralistic, Not Imperialistic

During the 1982 World Conference on Cultural Policies in Mexico City, Jack Lang, the French minister of culture, declared that "cultural and artistic creation is a victim of a system of multinational financial domination against which we must organize ourselves." Then he called the object of his tirade the product of "financial and intellectual imperialism." What prompted this outburst? The TV soap Dallas. That foolish program was so popular worldwide, it became a symbol of American cultural imperialism and a threat to European identity.

More than 13 years later, last December, French President Jacques Chirac warned leaders of the world's 47 French-speaking nations that if English continues to dominate the information highway, "our future generations will be economically and culturally marginalized." Chirac declared that 90 percent of information transmitted on the Net is in English, and it threatens to steamroll French language and culture.

Hello? Mr. Chirac, if anything is going to restore cultural identities, large and small, it is the Internet. I won't ask you what your forefathers did to the native language of Benin, the African nation where you made this proclamation. But I will remind you that the World Wide Web was invented in Switzerland, in the French-speaking part no less, and your own Minitel system is twice the size of America Online. The idea that the Net is another form of Americanization and a threat to local culture is absurd. Such conviction completely misses and misunderstands the extraordinary cultural opportunities of the digital world.

Italy may point the way for France


In Cagliari, the capital of Sardinia, lies evidence that the Net can be respectful of language, at once local and global, and a medium for cross-cultural exchange. Nichi Grauso's Video On Line (http://www.vol.it) provides a browser used in more than 20 languages, many of which you have never heard of: Afrikaans, Amarico, Ewe, Fa, Haoussa, Ibo, Kimbundu, Nyanja, Pulaar, Sangho, Suto, Tigrigna, Chokwe, Yoruba, Bassa, Indi, Kikongo, Lingala, Lunda, Mandekan, Fulani, Somali, Wolof, Tswana, Swahili.

Think of it. Less than two years ago, Nichi Grauso had not heard of the Internet (see "The
Berlusconi of the Net," Wired 4.01, page 78). When he discovered it, instead of grumbling about Netscape being in English, he created a multilingual browser and service already used or accessed by more than 500,000 people around the world.

Video On Line validates the decentralist structure of the Net, especially in the European context, in which governments own the highly centralist telephone companies that dominate the continent's telecommunications with poor service and high costs. Colonialism is the fruit of centralist thinking. It does not exist in a decentralized world.

Revisiting the ducks


I am fond of quoting MIT professor Mitch Resnick's story about ducks flying south in a V formation. The front duck is not leading. Each duck is a stand-alone processor that behaves according to local rules. My variation of this story is that if you shoot the front duck, it will drop and the rest will scatter. Eventually, the remaining ducks will regroup into a new V formation with a new front duck and continue on their way. No, the vice president duck did not become the president duck. That's not the way it works. (A toy sketch after the list below plays the story out.)

The essence of the Net, like the ducks, is a collection of interconnected and autonomous processors, none of which is in control and all of which can be a client one moment and a server another. In this scheme, it is not possible to colonize the Net and turn its users into English-speaking puppets in the way France turned 46 other regions or nations - each with its own indigenous languages - into French-speaking colonies. There are three reasons the Net will be free from such imperialism:

1. The cost of entry is low. With less than US$2,000 of capital equipment and $10 per month of recurring costs, you can publish on the Net, say, in Romansch. Under these conditions, it is probably unimportant that only 70,000 people in the southeast corner of Switzerland speak this language.

2. It can deliver to a sparsely populated universe, like Urdu-speaking brain surgeons around the world, even though there may be only two or fewer per city. Information and community can be pinpointed with total disregard for geographic density and without the need to justify or qualify them in the terms of a mass medium.

3. The Web has turned the "medium" inside out because the process enables you to "pull" information - versus having it "pushed" at you. Language-specific content can be accessed more easily than ever either by you or by digital agents; not to mention that in the near future, these same agents will be able to automatically translate documents into your own language.
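The duck story in miniature, as promised - my own toy illustration, not Resnick's actual model. No duck is appointed; "front duck" is an emergent property, and it survives the shooting:

    import random

    random.seed(1)
    ducks = [{"id": i, "pos": random.random()} for i in range(7)]

    def front(flock):
        """The 'leader' is simply whoever happens to be in front."""
        return max(flock, key=lambda d: d["pos"])

    leader = front(ducks)
    print("front duck:", leader["id"])

    ducks.remove(leader)                          # shoot the front duck...
    print("new front duck:", front(ducks)["id"])  # ...and the V re-forms on its own

No server assigns the new leader, just as no capital city runs the Net.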

English as a second language


While English is not the most widely spoken language in the world, it is definitely the most-used second language. A German in Greece will order her food in English, just as a Frenchman in
Germany will speak with a taxi driver in English. Similarly, the air traffic control standard is almost always English. This lingua franca should not be confused with cultural identity, nor should it be the basis for culture-wars rhetoric. In fact, thank god we have the means to share an operational language. It is not the language of love, good food, and fine wine - it is certainly not the language of Voltaire - but it is a utilitarian language that lands planes safely and keeps the Net's infrastructure running.

The Net is not produced and bottled in the US. In fact, more than 50 percent of Net users are outside the US, and that percentage is rising. By 2000, less than 20 percent of all Internet users will be in the US. So please, Mr. Chirac, stop confusing chauvinism with imperialism. The Net is humankind's best chance to respect and nurture the most obscure languages and cultures of the world. Your flaming is counterproductive to making our planet more pluralistic.

Next Issue: Affective Computing
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.03 March 1996.]

NEGROPONTE

Message: 34 Date: 4.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Affective Computing

Roz Picard, a professor at MIT, believes that computers should understand and exhibit emotion. Absurd? Not really. Without the ability to recognize a person's emotional state, computers will remain at the most trivial levels of endeavor. Think about it. What you remember most about an influential teacher is her compassion and enthusiasm, not the rigors of grammar or science. Consider one of the simplest forms of affect: attention. Isn't it irritating to talk to someone as his attention drifts off? Yet all computer programs ignore such matters. They babble on as if the user were in a magical state of attentiveness.

Raising the interest rates


When kids do poorly in school it is often because they are learning things outside the curriculum, which may include how to fight or how to market sex appeal. Regardless, they are learning. Certain subjects naturally engage them. With a topic like mathematics, often the problem is not inability to learn, it's a low interest rate (alas, not the financial one). If the material is too simple, the student may be bored. If it's too difficult, the student may become frustrated. In either case, an opportunity for learning is missed - an opportunity for the "aha" that comes with discovery, coupled with a neuropeptide rush that beats anything found on the playground. The instantaneous delight on the student's face says it all - "I want to learn more!"

When a child is working with a good personal tutor, that child's affective state is a key communicator. When the child gets frustrated, the tutor adjusts her approach. When the child shows increased interest, the tutor might suggest new challenging roads to explore. The tutor both recognizes and expresses emotion. In short, to be effective she must be affective.

Emotional communication usually relies on tone of voice, facial expression, and body language. How many hours (days?) have you lost trying to straighten out a miscommunication that occurred via email? Of course, you didn't mean it the way it "sounded." Your tone was misunderstood. We might say email is affect-limited. Emoticons, such as ;-), are a weak substitute. Affect is important; if it's missing, people tend to fill it in and often wrongly.

A truly personal computer


Classic theories of emotion are inconsistent because of the absence of common affects; people are different from one another. We cannot even agree on the physiological response from person to person. Lie detectors can be fooled, yet a friend can usually catch you in a lie. Recognizing and understanding emotion is both possible and meaningful when the process relies on knowledge of a particular person. One friend may flush red when upset; another may breathe more rapidly. Sensors exist that recognize changes in facial expression, heart rate, blood pressure, and more. Add pattern recognition and learning, and a computer could begin to understand a particular person's affective state. The results will be personal, not universal, and that is the point.

Wearable computers (see Wired 3.12, page 256) are part of the solution, especially when they can be placed in basic, universal items. They will not be restricted to perceiving only the visible and vocal forms of affect expression but will have the capacity to get to know you. If you wish, your wearable computer could whisper in your ear, perhaps after playing for a few too many hours with a few too many kids, "Patience, the birthday party is almost over." Interactive games might detect your level of fear and give bonus points for courage.

While taking measurements of an MIT student playing Doom, we expected electromyogram (jaw-clenching) responses to peak during high-action events. Instead, the biggest peak, significantly higher than the others, occurred when the student had trouble configuring the software. What if Microsoft could access a database of affective information from people interacting with its software and modify the parts that annoy people the most?
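What "add pattern recognition and learning" might mean, in miniature: a sketch with made-up sensor readings (heart rate, skin conductance), where the model is fitted to one particular person rather than to people in general - which is precisely the column's point:

    LABELED = {   # hypothetical readings gathered from one user, not a population
        "calm":       [(62, 0.20), (65, 0.25), (60, 0.22)],
        "frustrated": [(88, 0.60), (92, 0.70), (85, 0.55)],
    }

    def centroid(points):
        return tuple(sum(axis) / len(points) for axis in zip(*points))

    CENTROIDS = {label: centroid(pts) for label, pts in LABELED.items()}

    def affect(reading):
        """Classify a new reading by its nearest personal centroid."""
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        return min(CENTROIDS, key=lambda label: dist(reading, CENTROIDS[label]))

    print(affect((90, 0.65)))   # -> 'frustrated', for this user's physiology

Train the same few lines of arithmetic on a different person and the boundaries move; the classifier is personal, not universal.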

Emotional intelligence
Unless it is used like film or music - essentially as a vehicle for human expression - expressive computing may strike you as over the edge. After all, isn't freedom from emotional vagaries one of the advantages of a computer? You certainly don't want to wait for your computer to become interested in what you have to say before it will listen. Should a computer be limited to recognizing emotions and yet be prohibited from having emotions?

Too much emotion is clearly undesirable; we all know it wreaks havoc on reasoning. However, consider recent scientific findings regarding people who are essentially emotionally impaired (suffering from a tragic kind of brain injury). These people do not merely miss out on a luxurious range of feelings; they also lack basic rational decision-making abilities. The conclusion is that not enough emotion also impairs reasoning. Similarly, after decades of artificial intelligence efforts, unemotional, rule-based computers remain unable to think and make decisions. Endowing computers with the ability to recognize and express emotion is the first challenge; on its heels is a greater one - emotional intelligence.

For example, an affective steering wheel might sense you're angry (anger is a leading cause of automobile accidents). But what should it do? Prohibit you from driving while you, with escalating anger, rip out its sensors? Of course not. Emotional intelligence is a question of balance - a tutor reading emotional states and knowing when to encourage and when to let it rest. Until recently, computers have had no balance at all. It's time to recognize affect as a facet of intelligence and build truly affective computers.

This column was co-authored with Rosalind W. Picard (rwpicard@media.mit.edu), NEC Career Development Professor of Computers and Communications at the MIT Media Lab.

Next Issue: Caught Browsing Again
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.04 April 1996.]

NEGROPONTE

Message: 30 Date: 12.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Wearable Computing

The digital road warrior's kit - laptop, cell phone, PDA, and pager - is just capable enough to bother you everywhere without necessarily helping you anywhere. It's absurd that each device is still on such poor speaking terms with the others. We walk around like pack horses saddled with information appliances. We should be in the saddle, not under it.

The evolution of softwear


More than 20 years ago, the Architecture Machine Group at MIT built a media room based on the idea that one should be inside a computer rather than in front of it. While that vision foreshadowed today's immersive environments, it did not go far enough and shrink the room to the size of a person. In the future, the PC will be blown to bits, many of which, naturally, should be kept near you rather than in your home or at your office.

But so far, software has not been particularly soft. Though bits are as insubstantial as the ether, they tend to be packaged in hard boxes. For hardware and software to comfortably follow you around, they must merge into softwear. Developing wearable computing requires as much attention to the medium as the message. In fact, the medium becomes the massage.

What single manufactured material are you exposed to the most? The answer is fabric. We wear it, stand on it, sit on it, and sleep in it. Marvelous technology goes into looms, but all we ask fabric to do is protect us from the elements, look pretty, and not wrinkle or shrink. Can't it do more? Advances in conducting polymers and reversible optical media are pointing toward fabrics that can literally become displays. Amorphous semiconductors can be used to make solar cells to power fabric. Polymer semiconductors are candidates for wearable logic. The result would be the ultimate flexible computer architecture. Perhaps the biggest decision will be whether to buy clothes from Egghead or software from Brooks Brothers.

Fashion accessories will take on new roles, becoming some of the most important Internet access points, conveniently surrounding
you in a Person Wide Web. How better to receive audio communications than through an earring, or to send spoken messages than through your lapel? Jewelry that is blind, deaf, and dumb just isn't earning its keep. Let's give cuff links a job that justifies their name.

Footwear is particularly attractive for computing. Your shoes have plenty of unclaimed space, receive an enormous amount of power (from walking) that is currently untapped, and are ideally placed to communicate with your body and the ground. And a shoe bottom makes much more sense than a laptop - to boot up, you put on your boots. When you come home, before you take off your coat, your shoes can talk to the carpet in preparation for delivery of the day's personalized news to your glasses.

The body bus


A wearable computer will be useless if you have to walk around looking like the back of your desk. Fortunately, bits are more than skin deep. Tom Zimmerman (tz@media.mit.edu) has shown that the noncontact coupling between your body and weak electric fields can be used to create and sense tiny nano-amp currents in your body. Modulating these signals creates Body Net, a personal-area network that communicates through your skin. Using roughly the same voltage and frequencies as audio transmissions, this will be as safe as wearing a pair of headphones.

Keeping data in your body avoids the intrusion of wires, the need for an optical path for infrared, and conventional problems such as regulation and eavesdropping. Your shoe computer can talk to a wrist display and keyboard and heads-up glasses. Activating your body means that everything you touch is potentially digital. A handshake becomes an exchange of digital business cards, a friendly arm on the shoulder provides helpful data, touching a doorknob verifies your identity, and picking up a phone downloads your numbers and voice signature for faithful speech recognition.
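How data might ride those nano-amp currents is easy to caricature. A fanciful sketch - not Zimmerman's design, and none of the physics - using simple on-off keying, where each bit becomes a current level:

    NANOAMP = 1e-9

    def modulate(data):
        """Turn bytes into a train of tiny current samples through the skin."""
        bits = [(byte >> i) & 1 for byte in data for i in range(8)]
        return [bit * 30 * NANOAMP for bit in bits]      # 0 A or 30 nA

    def demodulate(samples):
        bits = [1 if s > 15 * NANOAMP else 0 for s in samples]
        return bytes(sum(b << j for j, b in enumerate(bits[i:i + 8]))
                     for i in range(0, len(bits), 8))

    card = b"Nicholas <nicholas@media.mit.edu>"          # the digital business card
    assert demodulate(modulate(card)) == card            # one handshake, exchanged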

Cyborgs
Cyborgs are here already. No, this isn't a paranoid fantasy about intruders from the future. Two cyborgs have been roaming the Media Lab, wearing computers day in and day out for over two years. It's an uncanny experience teaching a course to Thad Starner, who is simultaneously watching you lecture and annotating the lecture notes behind you through Reflection Technologies' Private Eye, a wearable heads-up display (the same used in Nintendo's Virtual Boy).

Steve Mann goes further, wearing a completely immersive system: movable cameras connect to a local computer and a transmitter to send video to a workstation for processing and delivery back to displays in front of his eyes. This lets him enhance what he sees (he likes living in a "rot 90" rotated world) and position his eyes. (Some days he likes having his eyes above his head, or at his feet, and when he rides a bicycle he sets one eye looking forward and one backward.) He can assemble everything he's seen into larger mosaics or 3-D images, and through the radio-frequency link you can see through his eyes (at http://www-white.media.mit.edu/~steve/netcam.html).

Don't expect to see much computing featured in Bill Blass's next collection, but this kind of digital headdress will become more common. Bear in mind that 20 years ago, no publisher anticipated that teletype terminals would grow into a portable threat to books, that paper tapes would merge with film into multimedia CD-ROMs, or that telephones would threaten the whole business model of publishing by bringing the Web into your home. The difference in time between loony ideas and shipped products is shrinking so fast that it's now, oh, about a week.

This article was co-authored by Neil Gershenfeld (gersh@media.mit.edu), an MIT professor and one of three co-principal investigators of the Media Lab's newest research consortium, Things That Think.

Next Issue: Where Do New Ideas Come From?
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.12 December 1995.]

NEGROPONTE

Message: 31 Date: 1.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Where Do New Ideas Come From?

Ideas come from people, obviously. But under what conditions are groups, corporations, and even nations most likely to foster new ideas? It's not an easy question. Many of the essentials of a fertile, creative environment are anathema to an orderly, well-run organization. In fact, the concept of "managing research" is an oxymoron. Setting short-term goals, then quickly testing to see if they will bear fruit is similarly absurd. Jerome Wiesner, former president of MIT and science advisor to President Kennedy, was fond of saying, "That's like planting a seedling and, a short while later, yanking it out to see if the roots are healthy."

Ideas may come like thunderbolts, but it can take a long time to see them clearly - too long. And ideas are often born unexpectedly - from complexity, contradiction, and, more than anything else, perspective. Alan Kay, father of the personal computer (among other things), likes to say that perspective is worth 50 points of IQ (it may be worth more, Alan). Marvin Minsky, father of artificial intelligence, says that you don't know something until you know it in more than three ways. They're both quite right.

Incrementalism: creativity's enemy


As large, high-tech corporations around the world reengineer themselves by downsizing and rightsizing, the first casualty is basic research. And with good reason. The uncertainty, the risk-reward ratio, and the sheer expense come at too high a price for a cost-conscious society, which includes belt-tightening managers and nearsighted shareholders.

Japanese corporations have long pooled their funds with government subsidies to achieve what they call "precompetitive research," which spreads the cost but doesn't help innovation. And, as senior executives in Japan are the first to admit, no self-respecting Japanese company sends its best people to these projects. Lackluster novelty only costs less in this scheme. Homogeneous, disciplined Japanese society has all the ingredients to refine concepts better than anyone else, but none of the juices to invent new ones.

Some computer science problems lend themselves perfectly to gradual improvement, or incrementalism - a step-by-step process of making something a little better each time. Very-large-scale integration is an example: scientists have succeeded in placing finer and finer lines
on silicon. CPUs get consistently faster for the same price, in more or less the same-sized package. Incrementalism works in this case, but as a function of local refinements, not big new ideas.

On the other hand, being digital (sorry) is more global and cuts across most of life. Joel Birnbaum, the luminous head of research at Hewlett-Packard, calls future computing "pervasive": "something you do not notice until it is missing." Such research must look outward, because it's not just about the next-generation PC, it's about life. IBM and Intel, among others, have sometimes suffered from looking inward too much and growing only their own company's kind of people. None of these businesses would want Albert Einstein or Bertrand Russell in its labs - let alone running them - even though the presence of such minds would surely bring perspective and help dampen incrementalism. Companies just don't work that way.

Universities do
Research universities are a good example of a source of new ideas, but they are suffering from federal cutbacks and hence are looking for corporate support. Some faculty members and administrators complain that turning to industry for funding compromises their research, shackles researchers, and makes scholarship shortsighted - "prostitution" is a word I have heard mumbled. Boy, are they wrong. We are precisely at a time when universities can do exactly what corporations cannot do and the government should not do: foster and nurture new ideas.

Let me qualify that. Government is not needed as a patron (the National Science Foundation could go away). But government may be a creative client, like a corporation, which in some ways is how the departments of energy, transportation, defense, and others work. Economic recession may be the best thing that has ever happened to university (as well as government) research, because companies have realized that they cannot afford to do basic research. What better place to outsource that research than to a qualified university and its mix of different people?

This is a wake-up call to companies that have ignored universities - sometimes in their own backyards - as assets. Don't just look for "well-managed" programs. Look for those populated with young people, preferably from different backgrounds, who love to spin off crazy ideas - of which only one or two out of a hundred may be winners. A university can afford such a ridiculous ratio of failure to success, since it has another, more important product: its graduates.

Maximize the differences


The best way to guarantee a steady stream of new ideas is to make sure that each person in your organization is as different as possible from the others. Under these conditions, and only these conditions, will people maintain varied perspectives and demonstrate their knowledge in different ways. There will be a lot of misunderstanding - which is frequently not misunderstanding at all, but the root of a new idea.

My advice to graduates is to do anything except what you are trained for. Take that training to a place where it is out of place and stimulate ideas, shake up establishments, and don't take no for an answer. This poses an interesting challenge to any research organization: be even more nimble and supportive of the unconventional, tolerate more way-out and expensive ideas, and encourage the seemingly disheveled behavior of hacker life.

In the pool of knowledge at a university, professors are not the fish, but the pond. The water is not chlorinated, clear, precisely circumscribed, and inhabited by one kind of perfect goldfish. It is a muddied habitat with fuzzy edges and home to all sorts of people, including those who do not fit traditional scholarship. That is where new ideas come from.

Next Issue: The Future of Books

WIRED 4.02 - The Future of Books

NEGROPONTE

Message: 32 Date: 2.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

The Future of Books

What weighs less than one millionth of an ounce, consumes less than a millionth of a cubic inch, holds 4 million bits, and costs less than US$2? What weighs more than 1 pound, is larger than 50 cubic inches, contains less than 4 million bits, and costs more than $20? The same thing: Being Digital stored on an integrated circuit and Being Digital published as a hardcover book.

The most common question I get is, Why, Mr. Digital Fancypants, did you write a book? Books are the province of romantics and humanists, not heartless nerds. The existence of books is solace to those who think the world is turning into a digital dump. The act of writing a book is evidence, you see, that all is not lost for those who read Shakespeare, go to church, play baseball, enjoy ballet, or like a good long walk in the woods. Anyway, who wants to read Michael Crichton's next book, let alone the Bible, on screen? No one. In fact, the consumption of coated and sheet paper in the United States has gone from 142 pounds per capita in 1980 to 214 pounds in 1993.

It's not a book - it's bits


The word is not going anywhere. In fact, it is and has been one of the most powerful forces to shape humankind, for both good and bad. St. Thomas said a few words in southern India almost 2,000 years ago, and today the southern province of Kerala is 25 percent Christian in a country where Christians are less than 1 percent of the population. There is no question that words are powerful, that they always have been and always will be. This back-page column, except for my loathsome picture, has never had anything in it but words.

But just as we seldom carve words in rocks these days, we will probably not print many of them on paper for binding tomorrow. The cost of paper (which has risen 50 percent in the past year), the amount of human energy required to move it, and the volume of space needed to store it make books as we know them less than the optimum method for delivering bits. Indeed, the art of bookmaking is not only less than perfect but will probably be as relevant in 2020 as blacksmithing is today.

It's not bits - it's a book


And yet books win big as an interface medium, a comfortable place where bits and people meet. They look and feel great, they are usually lightweight (lighter than most laptops), relatively low-cost, easy to use, handsomely random-access, and widely available to everyone. Why did I write a book? Because that is the display medium my audience has today. And it is not a bad one. We can "thumb" through books, annotate and dog-ear their pages - even sit or stand on them when we need to be a mite taller. I once stepped on my laptop, and the result was awful.

The book was invented 500 years ago by Aldo Manuzio in Venice, Italy. The so-called octavo format was a departure from previous manuscripts because it was handy, portable, and pocket-size. Manuzio even pioneered page numbering. Odd how Gutenberg gets credit while Manuzio is known to only a few.

Today's Manuzios are the flock of researchers looking for display materials capable of producing handy, portable, and pocket-size flat-panel displays for PDAs (personal digital assistants, a term coined by John Sculley five years ago and one of the weirdest acronyms to stick). In general, these efforts miss the point of "bookness," because the act of flipping through pages is an indisputable part of the book experience. In 1978 at MIT, we animated flipping pages on a screen and even generated fluttering sounds. Cute, but no cigar.

A new effort by Joe Jacobson at the Media Lab involves electronic paper, a high-contrast, low-cost, read/write/erase medium. By binding these pulplike, electronic leaves, lo and behold - you have an electronic book. These are quite literally pages onto which you can download words, in any type, in any size. For the 15 million Americans who want large-print books, this will be a gift from heaven - if Joe succeeds during the next couple of years. So, for those of you who don't want to climb into bed with "Intel inside," there is hope. This is the likely future of books.

The model said never to work


When my colleagues and I argue that the mass media of the future will be one that you "pull from" versus one that is "pushed at you," we are told: Poppycock! (Or worse.) These naysayers argue that a "pulling" model cannot be supported because it eclipses advertising. While I am not sure that is even true, let's pretend it is and ask ourselves: What mass medium today is larger than the American TV and motion picture industries combined, has no advertising, and is truly, as George Gilder puts it, a medium of choice? The answer: books.

More than 50,000 titles are published in the United States each year. Guess the typical number of copies published per title. A major house considers 5,000 to be about the lowest run it can support economically, while some of the small houses consider 2,000 copies of a title a large run. Yes, more than 12 million copies of John Grisham's novel The Firm were printed, and the first run of Bill Gates's book was 800,000. But the average is much smaller, and these less massive books are not unimportant. They just interest or reach fewer people.

So, the next time you ask yourself about the Web (which is doubling in size every 50 days) and wonder what will economically support so many sites (today, one homepage is added every 4 seconds), just think books. You say to yourself, Surely most of those Web sites will go away. No way. There will be more and more and, like trade books, there will be an audience for all of them. Instead of worrying about the future of the book as a pulp standard, think about it as bits for all: bestseller bits, fewer specialty-seller bits, and no-seller bits for grandparents from a grandchild. Meanwhile, some of us in research are working really hard to make books feel good and be readable - something you can happily curl up with or take to the john.

Next Issue: Language on the Net

WIRED 3.11 - Being Decimal

NEGROPONTE

Message: 29 Date: 11.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Being Decimal

Like dogs, laboratories age considerably faster than people. But while dogs age at a factor of seven, I would say labs age at a factor of 10, which made the MIT Media Lab 100 years old last month. When we officially opened our doors for business in October 1985, we were the new kids on the block, considered crazy by most. Even The New York Times called us "charlatans." While I was slightly hurt at being referred to as "all icing and no cake," it secretly pleased me because I had no doubt that computing and content would merge into everyday life.

Now, 10 years later, "multimedia" is old hat. The term appears in the names and advertising jingles of some of the most staid corporations. But becoming part of the establishment is a lot less fun than experiencing the risk and abuse of pioneering. So, how does a lab avoid sclerosis? How do we move into high-risk areas of the future after receiving acclaim and recognition for our past?

The answer is intimately tied to the nature of a research university - an institution that is both a liability and an asset when you're doing research. The liability is tenure, which guarantees lifetime employment for faculty, some of whom have long forgotten their creativity. The asset is students. I'm fond of telling people that I run a company with 300 employees and 20 percent turnover each year. But it's not just new faces. The incoming lot are always between 16 and 25 years old, even though the rest of us get older each year. That 20 percent churn is the fountain of youth.

Fishing for new ideas


Where do new ideas come from? The answer is simple: differences. While there are many theories of creativity, the only tenet they all share is that creativity comes from unlikely juxtapositions. The best way to maximize differences is to mix ages, cultures, and disciplines. This has been the model at the Media Lab since Day One, and it keeps many of us from becoming stale. The faculty is a mix of artists, engineers, and scientists who collaborate instead of compete. In MIT style, undergraduates arrive without knowing the word "impossible." This keeps the graduate students on their toes; they, in turn, keep the faculty lively.

So, how do we stay fresh and do it all again? Think of fishing. You arrive at a pond that has never been fished; not surprisingly, you catch plenty. But as the number of lines grows, you will catch a lot less. One has to know when it's time to find a new pond. No, we don't drop everything and start working on cold fusion or a method for turning lead into gold. The change is as much an attitude as anything else.

For example, there are 30,000 Americans over the age of 100. When these centenarians were studied, researchers found that diet, exercise, and healthy living were not the common denominator or prime force behind their longevity. Instead, in reverse order of priority, they were successfully coping with loss, keeping busy, and maintaining a positive attitude. I believe the same holds true for a laboratory. In our case, we are (barely) coping with the loss of Jerome Wiesner and Muriel Cooper (see Wired 2.10, page 100), everyone is extremely busy, and our optimism is contagious.

Love is a better master than duty


That's Einstein's saying, not mine. The many visitors to the Media Lab always see different value in different projects. What looks silly to one visitor may be construed as a fundamental breakthrough by another. But they all leave with one impression that leads to the same question: Why is everyone here so passionate about the work?

One answer is that we are different from corporate labs, where researchers are usually told what to do, and projects are constantly evaluated against criteria that include everything but the passion of the researcher. If I had to define my job at MIT, I would say simply that I connect passions to companies. If one side of the equation contains faculty passion but no immediate corporate interest, I play Robin Hood. And companies don't mind. But if corporate need is not matched by faculty passion, I muster all my restraint and turn down far more money than we receive. The Media Lab could be 10 times its size if we were willing to do more work in video data compression. We're not. That's the past, not the future.

Things that think


The future is about computer understanding. It's not about pixels, but objects. It is not about ASCII, but meaning. For this reason, an incredibly difficult problem like "computers with common sense" is a major part of the future for the Media Lab. This is not a new problem, just a hard one. In fact, it's so hard it has been more or less dismissed as impossible. What better challenge is there?

Another emerging theme concerns embedding computation into everyday objects that are first and foremost something else - a doorknob, a pair of sneakers, a chair, a toaster. The purpose is twofold. One is to improve the "personality" of the object - make it do what it does better. The other is for the object to perform duties that were never intended but are suited to circumstance by thinking and linking.

Consider your front door. Suppose the doorknob could recognize you as you approach and could tell the door to open so you wouldn't have to put down your packages. That would be a worthy knob. It would be one whose "doorknobness" is distinctly enhanced. Now consider a telephone. Telephones should never ring. If you're not there, the ringing is useless. If you are there, you'd probably prefer the phone be answered by a digital butler. If that digital butler determines that the call should be passed through, perhaps the nearest object should alert you. And that might be a doorknob.

You may or may not be convinced, but we are - sufficiently so to start a major new program called "Things That Think" on the occasion of our 10th birthday. An important component of the research is wearable computing. If this sounds silly to you, all the better. Ten years ago, "media convergence" was also considered silly. Tune back in when we are 20. Or rather, 200.

Next Issue: Wearable Computing

WIRED 2.10 - Sensor Deprived

NEGROPONTE

Message: 16 Date: 10.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


Proof of Presence

Sensor Deprived

When it comes to sensing human presence, computers aren't even as talented as those modern urinals that flush when you walk away. You can lift your hands from a computer's keyboard (or even pause between keystrokes) and your computer does not know whether the pause is for momentary reflection, or for lunch. We give a great deal of attention to human interface today, but almost solely from the perspective of making it easier for people to use computers. It may be time to reverse this thinking and ask how to make it easier for computers to deal with people.

A recent Media Lab breakthrough by Professor Neil Gershenfeld solves a range of user interface problems with a few dollars of hardware. A varying electric field induces a small (nanoamp) current in a person, which can be measured to locate that person in the field, making it possible to build smart appliances and furniture that remotely and unobtrusively locate fingers or hands in 2-D or 3-D, bodies in chairs, or people in rooms.

Another way for computers to sense human presence is through computer vision - giving machines the ability to see. Companies like Intel are now manufacturing low-cost hardware that eventually will lead to an embedded video camera above the screen of almost every desktop and laptop computer. This makes it possible for humans to telecommute and to collaborate visually from a distance. The computer could use that same camera to look at its user. Furthermore, machine vision could be applied to sensing and recognizing smiles, frowns, and the direction of a person's gaze, so that computers might be more sensitive to facial expression. Your face is, in effect, your display device; it makes no sense for the computer to remain blind to it.

I am constantly reminded of the tight coupling between spoken language and facial expression. When we talk on the telephone, our facial expressions are not turned off just because the person at the other end cannot see them. In fact, we sometimes contort our faces even more to give greater emphasis and prosody to spoken language. By sensing facial expressions, the computer could access a redundant, concurrent signal that enriches the spoken or written message.
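How a varying field translates into position is easy to take on faith. Below is a purely illustrative sketch in Python (a toy model, not Gershenfeld's actual method) that assumes the current measured at each electrode falls off with the square of the distance to the hand; the position then drops out of a simple least-squares search.

    # Toy model (an assumption for illustration): the current at each
    # electrode falls off as 1/d^2 from the hand. Layout is arbitrary.
    SENSORS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]  # electrode positions

    def predicted(pos):
        # Predicted current at each electrode for a hand at pos.
        return [1.0 / ((pos[0] - sx) ** 2 + (pos[1] - sy) ** 2 + 1e-6)
                for sx, sy in SENSORS]

    def locate(readings, steps=100):
        # Brute-force least-squares search over a 1 x 1 plane.
        best, best_err = None, float("inf")
        for i in range(steps + 1):
            for j in range(steps + 1):
                pos = (i / steps, j / steps)
                err = sum((p - r) ** 2
                          for p, r in zip(predicted(pos), readings))
                if err < best_err:
                    best, best_err = pos, err
        return best

    print(locate(predicted((0.3, 0.4))))  # recovers roughly (0.3, 0.4)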
Of Mice and Men


A mouse is one of the most absurd input devices. "Mousing around" requires four steps: 1) moving your hand to find the mouse, 2) moving the mouse to find the cursor, 3) moving the cursor to where you want it, and 4) clicking or double-clicking the button. Apple's innovative design of the new PowerBooks at least reduces these steps to three and puts the "dead mouse" where your thumbs are anyway, so that typing interruptions are minimized.

Where mice and trackballs really fall apart is in drawing. I defy you to sign your name with a trackball. This is where tablet technology, which has been moving more slowly down the cost-reduction curve, plays an important role. Nonetheless, few computers have a data tablet of any sort. Those that do present the problem of situating the tablet and keyboard, both of which compete for centrality, near the display. The clash is usually resolved by putting the keyboard below the display because only a few people touch-type.

High-Touch Computing
The dark horse in graphical input is the human finger. This is quite startling, considering the human finger is a device you don't have to pick up. You can move gracefully from typing (if typing has grace) to pointing, from the horizontal plane to the vertical. Why hasn't this caught on? Some of the limp excuses follow:

- You occlude that which is beneath your finger when you point at it. True, but that happens with paper and pencil as well, and it has not stopped the practice of handwriting or of using a finger to identify something on hardcopy.

- Your finger is low resolution. False. It may be stubby, but it has extraordinary resolution when the ball of the fingertip touches a surface. Ever so slight movement of your finger can position a cursor with extreme accuracy.

- Your finger dirties the screen. But it also cleans the screen. One way to think about touch-sensitive displays is that they will be in a kinetic state of more or less invisible filth, where clean hands clean and clammy ones dirty.

The real reason for not using fingers is, in my opinion, quite different. With just two states - touching or not touching - many applications are awkward at best. Whereas, if a cursor appeared when your finger was within, say, a quarter of an inch of the display, then touching the screen would carry the multiple states of a mouse click or data tablet. With such "near-field" finger touch, I promise you, we would see many touch-sensitive displays.
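A minimal sketch of that near-field idea (the quarter-inch figure comes from the paragraph above; everything else is an illustrative assumption) shows how a hover zone turns a two-state finger into something with the multiple states of a mouse:

    HOVER_RANGE = 0.25  # inches; a cursor appears inside this distance

    def finger_state(distance):
        # Map a sensed finger-to-screen distance (inches) to a state.
        if distance <= 0.0:
            return "touch"  # finger on glass: like a mouse click
        if distance <= HOVER_RANGE:
            return "hover"  # cursor tracks the finger, nothing selected
        return "away"       # no cursor shown at all

    for d in (1.0, 0.2, 0.0):
        print(f"{d:.2f} in -> {finger_state(d)}")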

Eyes as Output
Eyes are classically studied as input devices. The study of eyes as output is virtually unknown. Yet, if you are standing 20 feet away from another person, you can tell if that person is looking right in your eyes or just over your shoulder - a difference of a tiny fraction of a degree. How? It surely isn't trigonometry, wherein you compute the angle of the other person's pupil and then determine whether it is in line with your own gaze. No. That would require unthinkable measurement and computation. There is some kind of message passing, maybe a twinkle of the eye, which we just don't understand.

We constantly point with our eyes and would find such computer input valuable. Imagine reading a computer screen and being able to ask: What does "that" mean? Who is "she"? How did it get "there"? "That," "she," and "there" are defined by your gaze at the moment, not some clumsy elaboration. It makes perfect sense that your question concerns the point of eye contact with the screen and, to reply, the computer must know the precise point. In fact, when computers can track the human eye at a low cost, we are sure to see an entire vocabulary of eye gestures. When that happens, human-computer interaction will be far less sensor deprived and more like face-to-face communication, and be far better for it.

Next Issue: Digital Etiquette

WIRED 2.11 - Digital Etiquette

NEGROPONTE

Message: 17 Date: 11.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


All Fingers

Digital Etiquette

Imagine the ballroom of an Austrian castle during the 18th century, in full gilded splendor, glittering with the reflected light of hundreds of candles, Venetian mirrors, and jewels. Four hundred handsome people waltz gracefully to a 10-piece orchestra. Now imagine the same setting, but with this change: 390 of the guests learned how to dance the night before, and they are all too conscious of their feet. This is similar to the Internet today: most users are all fingers.

The vast majority of Internet users are newcomers. Most have been on it for less than a year. Their first messages tend to flood a small group of select recipients, not only with page after page, but with a sense of urgency suggesting the recipient has nothing else to do. Worse, it is so simple and cost-free to forward copies of documents that a single hit of the Return key can dispatch 15 or 50,000 unwelcome words into your mailbox. That simple act turns e-mail from a personal and conversational medium into dumping; it is particularly distressing when you are connected over a narrow link.

Some of us who have been on the Internet or its predecessors for a long time (a quarter of a century, in my case) pride ourselves on being available. The e-mail address above is my real e-mail address, and I make every effort to answer everything I receive. Therefore, I feel I have the right to be opinionated about its abuse as a communications medium. Netiquette is particularly important to me because I use e-mail during many hundreds of thousands of miles of travel each year, from foreign lands, in strange places, through weird positions (usually caused by an unfriendly telephone booth or hidden phone jack). One result is that I often see my e-mail at low and heavily error-prone bit rates. This strengthens e-character.

One journalist commissioned to write about these newcomers and their inconsiderate use of the Internet researched his story by sending me and others a four-page questionnaire - without asking first and without the slightest warning. His story should have been a self-portrait. Common courtesy suggests a short introductory request - as opposed to the wholesale and presumptuous delivery of questions.

In general, however, e-mail can be a terrific medium for both the reporter and the reported. E-mail interviews are far more satisfying for people like me, because replies can be considered at leisure. They are less intrusive and allow for more reflection. I am convinced that e-interviews will happen more and more, ultimately becoming a standard tool for journalism around the world, provided that reporters can learn some manners.

Ugly Habits
Some of the ugliest digital behavior results from having plentiful bandwidth and using it with careless abandon. I am convinced that the best way to be courteous with alphanumeric e-mail on the Net is to assume the receiver of the message has a mere 1200 baud and only a few moments of attention.

An example of the contrary (a habit practiced to my alarm by many of the most seasoned users I know) is returning a full copy of my message with a reply. That is perhaps the laziest way to make e-mail meaningful, and it is a killer if the message is long (and the channel thin). It takes so little effort to weave referents into an answer or cut and paste a few relevant pieces. The opposite extreme is even worse, such as the reply "Sure." Sure, what? Similarly, the use of undefined pronouns is irksome when they refer to an earlier message. As distinguished from spoken conversation, e-mail has variable chunks of time (and space) between segments.

The worst of all digital habits, in my opinion, is the gratuitous "cc" which, among other things, gives new meaning to the word "carbon." It has scared off many senior executives from being on-line. The big problem with electronic cc's is that they can multiply themselves, because replies are all too frequently sent to the entire cc list. If a person is organizing an impromptu international meeting and invites 50 people to attend, the last thing I want to see is the travel arrangements of the other 49.
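A back-of-the-envelope calculation (the message size and character width below are assumed, and 1200 baud is treated as roughly 1200 bits per second) shows why a quoted-back message is such a killer on a thin channel:

    # Assume a 5,000-word message quoted back in full, about six
    # characters per word including spaces, eight bits per character,
    # and no framing overhead - all illustrative round numbers.
    words = 5_000
    bits = words * 6 * 8
    minutes = bits / 1200 / 60  # receive time at ~1200 bit/s

    print(f"about {minutes:.1f} minutes just to download it")  # ~3.3 min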

Never Do E-Mail through a Secretary


Some of my closest colleagues claim to be fully available on e-mail. What they mean is that a secretary prints out messages and transcribes dictation. (A senior member of the MIT computer science community gave me the limp excuse: "I can speak faster than I can type." Well, I can, too.) Using a secretary is hardly the equivalent of being online, and it reduces e-mail to the state of being no more than a fast post office.

Sitting at the keyboard yourself and staring at the message (either received or to-be-sent) is a process that engages a different ethos - a certain politeness, some humility, and an ability to be involved in a fashion only one step removed from a real conversation. You are accessible in a new and different way. (Senior management, take note.) It is so easy to send a short and kind reply that I find myself doing so all the time to people who would never get through the forest of secretaries who guard me from telephone calls and manage my meetings.

Consider the total time required for me to dictate a short letter (which I do sometimes), to have it typed, to proof it, to sign it, and to have it posted (or, God forbid, faxed). The elapsed time is surely no less than 20 minutes of total human time (probably more). By contrast, I can answer the same by e-mail in less than 20 seconds.

My e-mail box is not polluted. (This column may end that.) The reason, I believe, is that people really don't want to foul their own doorstep. At the Media Lab my e-mail responsiveness is a family joke: never more than a few hours, 365 days a year. People are careful not to abuse my accessibility, because it is like an open door. If there is too much noise outside, it is easy to shut. Wired e-mail is usually considered and interesting, and I learn a great deal from it. (But often it is too long.) If you are a newcomer to this medium, remember that some others are not and may live and die by it. The best netiquette advice I can offer you is: be brief.

Next Issue: Digital Expression

WIRED 2.12 - Digital Expression

NEGROPONTE

Message: 18 Date: 12.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


Computing as Photography

Digital Expression

Jerome Wiesner, former president of MIT and co-founder of the Media Lab, tells a story about Vladimir Zworykin, who visited him one Saturday at the White House when Wiesner was John Kennedy's science advisor. He asked Zworykin if he had met the president. As Zworykin had not, Wiesner took him across the hall and introduced him as "the man who got you elected." Startled, the President asked, "How is that?" Wiesner explained: "This is the man who invented television." Kennedy replied, "How terrific. What an important thing to have done," to which Zworykin wryly commented: "Have you seen television recently?"

Technological imperatives, and only those imperatives, drove the development of TV. Then it was handed off to a body of creative talent with different values and from a different intellectual subculture. Photography, on the other hand, was invented by photographers. The people who perfected photographic techniques did so for their own expressive purposes, fine-tuning the technology to meet the needs of their art. Means and messages were deeply intertwined.

Personal computers have moved computer science away from the purely technical imperative and are evolving more like photography. Computing is being channeled directly into the hands of very creative individuals at all levels of society and is becoming the means for creative expression in both its use and development. The means and messages of multimedia will become a blend of technical and artistic achievement.

Music as Fly Paper


Music is one example. During the Media Lab's early days, MIT colleagues advised me to avoid computer music. They said: "Nicholas, MIT thinks multimedia is sissy science; including music will just put the nail in the coffin." To me their remarks were a code: do it. Ten years later, music has proven to be one of the most important shaping forces for the Media Lab.

Music can be viewed from three diverse but complementary perspectives, each as powerful as the others. Music can be considered from the digital signal processing point of view, including such difficult problems as sound separation (like taking the noise of a fallen Coke can out of a music recording). Or it can be considered from the perspective of musical cognition: how do we interpret the language of music, what constitutes appreciation, and where does emotion come from? Finally, music can be treated as artistic expression, with a story to be told and feelings to be aroused. The point is that all three are important in their own right and allow the domain - music - to be the perfect intellectual landscape for moving gracefully between science and art.

The traditional kinship between mathematics and music is multiplied manyfold within the hacker community, which tends to be musically inclined, if not gifted. Even if music is not a student's professional objective, it satisfies an often important need for avocation. This can be generalized, because many avocations are needlessly subordinated by parental and social forces when they could be vehicles for more meaningful, deeper learning. The concept of a hobby is subject to great change in digital life. While the word is used to mean an extracurricular passion, in the digital world such hobbies can be part of the toys with which we think and the tools with which we play. The computer provides a complete range of points of entry to music and does not limit access to the prodigious child, nor to those who are sufficiently disciplined or genetically inclined.

The Return of the Sunday Painter


Painting is another example. A refrigerator door with a child's drawing attached to it is as wholesome as apple pie. We encourage our children to be expressive and to make things. But when they reach age 6 or 7, we switch gears on them. We leave them with the impression that art class is at best like baseball (a hobby) and at worst for wimps. And for the next 20 years we feed their left brains like a Strasbourg goose, leaving the right sides to catch as catch can - or shrivel into a pea.

Seymour Papert tells the story of a mid-19th-century surgeon magically transported through time into a modern operating theater. This doctor would not recognize a thing, would not know what to do or how to help. Modern technology has transformed the practice of surgical medicine. By contrast, if a mid-19th-century schoolteacher were carried by the same time machine into a present-day classroom, that teacher could be a substitute teacher today, more or less picking up where his or her late-20th-century peer left off. There is no fundamental difference between the way we teach today and the way we did 150 years ago. The technology is almost the same.

This sort of change is slow. I believe that's because it's deep - deeper than most people think. We are moving away from a hard-line mode of teaching that caters primarily to compulsive, serialist children, toward one that is more porous and draws no lines between art and science or right brain and left brain. When children use the Logo programming language to make pictures on their computer screens, those images are at once artistic expression and mathematical expression, seen as both or either.
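A minimal illustration of that dual reading, using Python's standard turtle module as a stand-in for Logo (the five-pointed star is an invented example, not Papert's, and needs a display to run):

    import turtle

    t = turtle.Turtle()
    for _ in range(5):
        t.forward(100)  # the side of a star: something you can see
        t.right(144)    # 5 turns of 144 degrees = 720 degrees, two full
                        # revolutions - the arithmetic that closes the figure
    turtle.done()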
What was once only an abstract concept, like math, now has a window into it with many components from the visual arts. What this means by extension is that computers will make our future adult population much more visually literate and artistically able than today's. Ten years from now, teenagers are likely to enjoy a much richer panorama of options, because the pursuit of intellectual achievement will not be tilted in favor of bookworms but will cater to a range of expressive tastes.

"The Return of the Sunday Painter," the title of a chapter I contributed to The Computer Age: A Twenty-Year View more than two decades ago, is meant to suggest a new era of respect for avocations and a future with more active engagement in making, doing, and expressing. My belief in this comes from watching computer hackers, both young and old. Their programs are like paintings: they have aesthetic qualities and are shown and discussed in terms of their meaning from many perspectives. Their programs include behavior and style that reflect their makers. These people are the forerunners of the new expressionists.

Next Issue: Bits and Atoms

WIRED 3.01 - Bits and Atoms

NEGROPONTE

Message: 19 Date: 1.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


The $400 Limit Applies to Atoms Only

Bits and Atoms

When returning from abroad, you must complete a customs declaration form. But have you ever declared the value of the bits you acquired while traveling? Have customs officers inquired whether you have a diskette that is worth hundreds of thousands of dollars? No. To them, the value of any diskette is the same - full or empty - only a few dollars, or the value of the atoms.

I recently visited the headquarters of one of the United States' top five integrated-circuit manufacturers. I was asked to sign in and, in the process, was asked whether I had a laptop computer with me. Of course I did. The receptionist asked for the model, serial number, and the computer's value. "Roughly US$1 to $2 million," I said. "Oh, that cannot be, sir," she replied. "What do you mean? Let me see it." I showed her my old PowerBook (whose PowerPlate makes it an impressive 4 inches thick), and she estimated its value at $2,000. She wrote down that amount, and I was allowed to enter.

Our mind-set about value is driven by atoms. The General Agreement on Tariffs and Trade is about atoms. Even new movies and music are shipped as atoms. Companies declare their atoms on a balance sheet and depreciate them according to rigorous schedules. But their bits, often far more valuable, do not appear. Strange.

Atoms Are Judged Less Greene than Bits


When Judge Harold Greene broke up AT&T in 1983, he told the newly created regional Bell operating companies that they could not be in the information business. Who did he think he was fooling? The seven sisters were already in the information business and doing just fine, thank you. Their largest margins were (and still are) from the Yellow Pages, which they have sold at great profit. Judge Greene, sir, the companies are and always have been in the information industry. What are you talking about?

What the judge is saying is that the companies have every right to kill thousands of trees, to litter our homes, and to fill garbage sites with their information business, as long as this information is in the form of atoms - paper hurled over the transom. But as soon as the companies deliver the exact same information with no-deposit, no-return, environmentally friendly bits, they have broken the law. Doesn't that sound screwy? Was anyone thinking about the meaning of "being digital" during the time that AT&T was being disassembled? I fear not.

Pay per View


During a speech I gave at a recent meeting of shopping center owners, I tried to explain that a company's move into the digital future would be at a speed proportionate to the conversion of its atoms to bits. I used videocassette rental as an example, since these atoms could become bits very easily. It happened that Wayne Huizenga, Blockbuster's former chairman, was the lunch speaker. He defended his stock by saying, "Professor Negroponte is wrong." His argument was based largely on the fact that pay-per-view TV has not worked because it commands such a small piece of the market. By contrast, Blockbuster can pull Hollywood around by the nose, because video stores provide 50 percent of Hollywood's revenues and 60 percent of its profits.

I thought about Huizenga's remark and realized that this extraordinary entrepreneur did not understand the difference between bits and atoms. His atoms - videocassettes - prove that video-on-demand will work. Videocassettes are pay-per-view TV. The only difference is that in his business he can draw as much as one-third of the profits from late fees.

Library of the Future


Thomas Jefferson introduced public libraries as a fundamental American right. What this forefather never considered was that every citizen could enter every library and borrow every book simultaneously, with a keystroke, not a hike. All of a sudden, those library atoms become library bits and are potentially accessible to anyone on the Net. This is not what Jefferson imagined. This is not what authors imagine. Worst of all, this is not what publishers imagine.

The problem is simple. When information is embodied in atoms, there is a need for all sorts of industrial-age means and huge corporations for delivery. But suddenly, when the focus shifts to bits, the traditional big guys are no longer needed. Do-it-yourself publishing on the Internet makes sense. It does not for paper copy.

Markoff-on-Production
It was through The New York Times that I came to know and enjoy the writing of computer and communications business reporter John Markoff. Without The New York Times, I probably would not have been introduced to him. However, now it would be far easier for me to collect his new stories automatically and drop them into my personal newspaper or suggested reading file. I would be willing to pay Markoff 5 cents for each of his new pieces. If one-fiftieth of the 1995 Internet population subscribed to this idea, and Markoff wrote 20 stories a year, he would earn $1 million, which I am prepared to guess is more than The New York Times pays him. (For that arithmetic to work, one-fiftieth must be about a million readers, each paying $1 a year for 20 stories at 5 cents - implying a Net population of roughly 50 million.) If you think one-fiftieth is too large a percentage, then wait awhile.

Once someone is established, the added value of a distributor becomes less and less in a digital world. The distribution and movement of bits is much easier than that of atoms. But delivery is only part of the issue. A media company is, among other things, a talent scout, and its distribution channels, bits or atoms, provide a test bed for public opinion. But after a certain point, the author may not need this forum. In the digital age, WIRED authors can sell their stories direct and make more money, once they are discovered. While this does not work today, it will work very well, very soon - when "being digital" becomes the norm.

Next Issue: Being Digital

WIRED 3.02 - Being Digital - A book (p)review

NEGROPONTE

Message: 20 Date: 2.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


The Paradox of a Book

Being Digital - A book (p)review

When I agreed to write the back page for WIRED, I had no idea what it would entail. I encountered many surprises. The biggest by far was my discovery that the magazine readership included a wide range of people, not just those with an @ behind their name. When I learned that kids were giving WIRED to their parents as Christmas presents, I was blown away. There seems to be a huge thirst to understand computers, electronic content, and the Net as a culture - not just as a technology. For this reason, and with encouragement from many readers (both rants and raves), I decided to repurpose my WIRED columns into a book entitled Being Digital, which comes out the first of February.

The idea sounded simple in June - but 20 stories don't necessarily string together into one book, even if they happen to be pearls. More important, so much has changed so quickly that the future-looking early stories have become old hat. To my surprise, one thing that held up from the beginning was that the columns used words alone - no pictures. That seemed to work. As one of the inventors of multimedia, I found it ironic that I never use illustration. Furthermore, as a believer in bits, I had to reconcile myself to the idea that my publisher, Knopf, would be shipping mere atoms around.

Bits Are Bits


But I did learn a few things as I mined my columns for the themes that run throughout Being Digital. The first is that bits are bits, but all bits are not created equal. The entire economic model of telecommunications - based on charging per minute, per mile, or per bit - is about to fall apart. As human-to-human communications become increasingly asynchronous, time will be meaningless (five hours of music will be delivered to you in less than five seconds). Distance is irrelevant: New York to London is only five miles farther than New York to Newark via satellite.

Sure, a bit of Gone with the Wind cannot be priced the same as a bit of e-mail. In fact, the expression "a bit of something" has a new and enormous double meaning. Furthermore, we are clueless about the ownership of bits. Copyright law will disintegrate. In the United States, copyrights and patents are not even in the same branch of government. Copyright has very little logic: you can hum "Happy Birthday" in public to your heart's delight, but if you sing the words, you owe a royalty. Bits are bits indeed. But what they cost, who owns them, and how we interact with them are all up for grabs.
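A rough check of what "five hours of music in less than five seconds" implies, assuming uncompressed CD-quality stereo (an assumption of mine; compression would lower the number, but not the moral):

    rate = 44_100 * 16 * 2   # CD audio: samples/s x bits x channels, ~1.4 Mbit/s
    total = rate * 5 * 3600  # five hours of music, in bits
    link = total / 5         # delivered in five seconds

    print(f"needs a link of about {link / 1e9:.1f} Gbit/s")  # roughly 5 Gbit/s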

Interface - Where Bits and People Meet


You cannot experience a bit. It must be turned back into atoms for human beings to enjoy it. While the process of converting bits to atoms has become sensory-rich, the reverse direction - turning atoms into bits - is almost desolate. Human input to machines is Paleolithic and keeps most parents and many of our friends from being wired. The bottlenecks are speech (long overdue) and vision (normally not considered).

What I realized in writing this part of the book is that a happy coincidence is staring us in the face (so to speak). Many companies, notably Intel (which is very vocal about many things), are pushing desktop videoconferencing. The result is that sooner rather than later, we will have a growing population of machines with solid-state television cameras at the top of the screen and built-in microphones at the bottom. While this design has been conceived to pass your voice and a picture of your face to a remote and similar machine, it could serve handsomely as a direct feed into your computer - not a teleconference but a local conference with your machine. So please, Intel, make sure that audio and video are processable, so my machine sees my face and hears my voice. On occasion, I really do want to be in a society of one.

Digital Life
Here is where my optimism may have gotten in the way; I guess I have too many of those O (for optimistic) genes. But I do believe that being digital is positive. It can flatten organizations, globalize society, decentralize control, and help harmonize people in ways beyond not knowing whether you are a dog. In fact, there is a parallel, which I failed to describe in the book, between open and closed systems and open and closed societies. In the same way that proprietary systems were the downfall of once great companies like Data General, Wang, and Prime, overly hierarchical and status-conscious societies will erode. The nation-state may go away. And the world benefits when people are able to compete with imagination rather than rank.

Furthermore, the digital haves and have-nots will be less concerned with race or wealth and more concerned (if anything) with age. Developing nations will leapfrog the telecommunications infrastructures of the First World and become more wired (and wireless). We once moaned about the demographics of the world. But all of a sudden we must ask ourselves: Considering two countries with roughly the same population, Germany and Mexico, is it really so good that less than half of all Germans are under 40 and so bad that more than half of all Mexicans are under 20? Which of those nations will benefit first from "being digital"?
And You Don't Even Have to Read It


One of the many things I learned is that publishers simultaneously release an audio version of a book. I discovered this at the same time I learned I was expected to read it aloud myself. Being dyslexic, even with my own words, I refused. Then I asked Knopf if Penn Jillette (see WIRED's September cover) could do it. Penn is one of the coolest people I know, and I felt he would bring all sorts of magic to the process. During the time I thought Knopf was stewing over this wild idea, they were in fact asking Penn, before I could even get to him. He graciously agreed. All he asked me by e-mail was, "Are there any hard words?" No, there are none.

Next Issue: 000 000 111

WIRED 3.03 - 000 000 111 - Double Agents

NEGROPONTE

Message: 21 Date: 3.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

000 000 111 - Double Agents

When you delegate the tasks of mowing your lawn, washing your car, or cleaning your suit, very little privacy is at stake. By contrast, when you hand over the management of your medical, legal, or financial affairs to another human, the performance of those tasks depends on your willingness to reveal very private and personal information. While oaths and laws may protect some confidentialities, there is no real regulatory shield against the leaking of intimate knowledge by human assistants. That is achieved solely through trust and mutual respect. In the digital world, such high regard and real confidence will be more difficult to accomplish, given the absence of actual or inferred values in a nonhuman system. In addition, a society of electronic agents will be able to communicate far more efficiently than a collection of human cooks, maids, chauffeurs, and butlers. Rumors become facts and travel at the speed of light.

Since I constantly argue in articles and lectures that intelligent agents are the unequivocal future of computing, I'm always asked about privacy. However, the question is usually posed without a thorough appreciation of how serious an issue privacy is. As many of my speeches are delivered to senior executives in fancy resorts, I sometimes announce that I have arranged with the hotel management to receive a list of the movies watched by members of the (usually) male-dominated audience in their rooms the night before. As half the faces in the audience turn red, I admit I am joking. But no one is laughing. It's quite telling, but not that funny.

All of a sudden, our smallest actions leave digital trails. For the time being, these "bit-prints" are isolated instances of very small parts of our lives. But over time, they will expand, overlap, and intercommunicate. Blockbuster, American Express, and your local telephone company can suddenly pool their bits in a few keystrokes and learn a great deal about you. This is just the beginning: each credit-card charge, each supermarket checkout, and each postal delivery can be added to the equation. Extrapolate this trend and, sooner or later, you are but an approximation of your own computer model. Does this bother you?

Hermes and hermetics


It doesn't bother me, and this is why. The data concerning whom I called, what I watched, and where I ate is not very interesting in comparison with either why I did so or any consequential information from my doing so (I liked the meal, my guest liked it, or neither of us liked it but didn't want to admit it). The fact that I ate someplace is almost meaningless if the intent and the result are unknown. Purpose, intent, and subsequent feelings are far more important than the action or choice itself. I leave only a few digital crumbs for the direct-marketing community by revealing, for example, that I dined somewhere. The interesting data is held by the agent who made the reservation and later asks me how the evening went.

Today, marketers reverse-engineer a consumer's choice to infer why a decision was made. Advertisers cluster such demographics to further guess whether I might be inclined to purchase one soap flake versus another. Tomorrow, this will change. We can opt to tell a computer agent what we want, when we want it, and, therefore, how to build a model of us - the collective reasoning of the past, present, and future (as far as we know it). Such agents could screen and filter information and anonymously let the digital marketplace know that we are looking for something.

Two kinds of agents will exist in that scenario: one will stay at home (on your wrist, in your pocket, in your radio) and one will live on the Net, surfing on your behalf, carrying messages back and forth. To some degree, the homebodies can be hermetically sealed. They will read bit streams about products and services broadcast in abundance through wired and wireless channels. They will scoop off subsets of information of personal interest - an act as simple as grabbing a stock quote for you, and as complicated as determining your interest in a segment of a talk show. These agents will be "all ears."

Messenger agents will be more complicated. They will function as we do today when they cruise the Net looking for interesting things and people. We are at a time in history when the Net is sufficiently small for some to believe that Mosaic and other browsing tools are the only future. They are not. Even today, the people surfing the Net are distinguished by having the time to do so. In the future, there will be almost as few humans browsing the Net as there are people using libraries today. Agents will be doing that for most of us. These Net-dwelling agents are the ones we need to worry about when it comes to privacy. They need to be tamper-proof, and we must find ways to preclude new forms of kidnapping (agent-napping). Sounds silly? Just wait until the courts begin to agonize over whether intelligent agents can testify against us.
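A minimal sketch of such a "homebody" agent (an illustration of the idea, not a Media Lab design; the tags and interests are invented):

    # The owner's interests live only on the local device; the broadcaster
    # never learns them - the agent simply scoops matching items off the air.
    INTERESTS = {"stock:AAPL", "topic:multimedia"}

    def scoop(broadcast):
        # Yield only the items whose tags overlap the owner's interests.
        for tags, payload in broadcast:
            if tags & INTERESTS:
                yield payload

    stream = [
        ({"stock:IBM"}, "IBM closes at 95 1/4"),
        ({"stock:AAPL"}, "AAPL closes at 41 1/2"),
        ({"topic:multimedia"}, "New CD-ROM title announced"),
    ]
    for item in scoop(stream):
        print(item)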

Clipper ships
Security and privacy are deeply interwoven. The government is asking us to sail in an ocean of data, but it wants the ability to board our (Clippered) ships at any time. This has outraged the digerati and has become the object of enormous debate in WIRED and other places. I yawn. This is why.

Encryption is not limited to a single layer. If I want to send you a secret message, I promise you that I can, without any risk of anyone else being able to decode it. I simply place an additional layer of encryption on top of the data, using an unbreakable code. Such codes need not be the wizardry of mathematicians or the result of massive electronics; they can be simple but secure (a sketch of the idea follows at the end of this column).

To prove this, I have put 105 rows of 12 bits on the spine of my book, Being Digital. These bits contain a message. I bet that you will never be able to decode it. If classrooms of hotshot math students want to try, be my guest. WIRED magazine will honor you at great length. But don't spend too much time. It is not nearly as easy as the title of this story: James Bond.

Next Issue: The Balance of Trade of Ideas
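As promised above, a minimal sketch of that extra, unbreakable layer: a one-time pad applied before any inspectable cipher (key distribution is glossed over, and the plaintext is a nod to the column's hidden message):

    import secrets

    def xor(data: bytes, pad: bytes) -> bytes:
        # XOR with a random, secret, never-reused pad: the classic
        # one-time pad, unbreakable when those conditions hold.
        return bytes(a ^ b for a, b in zip(data, pad))

    message = b"James Bond"                  # stand-in plaintext
    pad = secrets.token_bytes(len(message))  # shared in advance, used once

    inner = xor(message, pad)  # layer one: the unbreakable code
    # ...layer two would be the government-inspectable (Clippered) cipher,
    # applied to `inner`; boarding the ship reveals only gibberish...
    assert xor(inner, pad) == message  # the recipient peels layer one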
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.03 March 1995.]
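One classic reading of the "simple but unbreakable" extra layer the column promises is a one-time pad: XOR the message with a truly random key as long as the message, used once. A minimal sketch in modern Python (the sample message and key handling are illustrative only; the column does not say which code the book spine uses):

```python
# One-time pad: XOR each message byte with a random key byte.
# With a key as long as the message and used only once, the ciphertext
# is information-theoretically secure - no mathematics can break it.

import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(message))          # random pad, used once
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = otp_encrypt(b"meet me in Cambridge")
assert otp_decrypt(ciphertext, key) == b"meet me in Cambridge"
```

Layering such a pad on top of a government-escrowed cipher is precisely why the Clipper debate left the author yawning: the boarding party finds only noise.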


WIRED 3.04 - The Balance of Trade of Ideas

NEGROPONTE

Message: 22 Date: 4.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

The Balance of Trade of Ideas

A December 19, 1990, front-page story in The New York Times, "MIT Deal with Japan Stirs Fear on Competition," accused the Media Lab of selling out to the Japanese. This news flash concerned the 1986 endowment from a Japanese industrialist who wished to provide, through an affiliation of five years, his alma mater with the seeds of basic research in new media.

Believe me, you never want to be on the front page of The New York Times. I did not realize the degree to which such an appearance becomes news unto itself, as well as fodder for derivative stories. Newsday wrote an editorial based on that story less than a week later, called "Bye Bye High Tech," without checking any of the details.

1990 marked a peak in US scientific nationalism. American competitiveness was crumbling, the deficit was rising, and we were no longer Number One at everything. So for goodness sake, Nicholas, the editorials implored, don't tell the world how to do software, especially multimedia, something the United States pioneered and dominated.

Well, it doesn't work that way, especially in an era when computing is no longer limited to the large institutions and nations that can afford it. What particularly irked me was the notion that ideas should be treated like automobile parts, without any understanding of where they come from or how they evolve. Ironically, this particular case of seemingly unpatriotic behavior related to the field of consumer electronics, where hardware had long been abandoned by American industry. Zenith, one of the most vocal critics at the time, doesn't even build TV sets in the United States, while Sony manufactures products in San Diego and Pittsburgh that are sold domestically as well as exported throughout the world. Odd, isn't it?

Damned if you do, damned if you don't


When I posed the question, "Isn't it better to create jobs (like Sony) than to own offshore factories (like Zenith)?" some of my most distinguished MIT colleagues replied that ownership was power, and, in the end, the Japanese would keep all the "good" jobs in Japan and leave only menial-task positions in the US. I thought hard about this logic. Shortly afterward, NEC Corporation was criticized by the American press for setting up a basic-research laboratory in Princeton, New Jersey, where 100 people (95 percent of them US citizens) are engaged in fundamental science - "good" jobs. But now, that was bad too, maybe worse, because Japan would run away with our creative skills, getting the goose and the golden eggs.

This is silly! New ideas come from differences. They come from having different perspectives and juxtaposing different theories. Incrementalism is innovation's worst enemy. New concepts and big steps forward, in a very real sense, come from left field, from a mixture of people, ideas, backgrounds, and cultures that normally are not mixed. For this reason, the global landscape is the most fertile ground for new ideas.

Global cottage research


During the recent past, a prerequisite for being global was being big. This applied to countries, to companies, and, in a sense, to people. Big nations took care of smaller countries, huge corporations were the multinationals, and the rich were the internationals. Today, this paradigm is changing, and this change will have a huge effect on the world trade of ideas. In the world of bits, you can be small and global at the same time.

In the early days of computing, only a few institutions owned tools to think with, like linear accelerators. Many of the players were in debt to the few who could afford the luxury of science. They poached on the basic research provided by those who had the equipment to do it. Today, a US$2,000 100-MHz Pentium PC has more power than MIT's central computer had when I was a student. In addition, so many peripherals are being manufactured at consumer prices that everyone can play in the multimedia and human-interface arena. This means individuals or researchers from developing nations can now contribute directly to the world's pool of ideas. Being big does not matter. For these reasons, more than ever before, we must trade ideas, not embargo them.

Reciprocity on the Net


The Net makes it impossible to exercise scientific isolationism, even if governments want such a policy. We have no choice but to exercise the free trade of ideas. I once got angry with the people who said that American tax dollars spent on basic research should go to American companies - and I got angrier when racism reared its ugly head. It was OK to do business with RCA (100 percent owned by the French government) but not OK to collaborate with the many Japanese companies that know a lot more about consumer electronics than we do.

Now I see the problem differently. The Net has forced such open exchange, with or without government sanction, that the onus is on other governments, especially those in developing countries, to change their attitudes. For example, newly industrialized nations can no longer pretend they are too poor to reciprocate with basic, bold, and new ideas. Before the Net existed, scientists shared their knowledge through scholarly journals, which often published papers over a year after they were submitted. Now that ideas are shared almost instantly on the Net, it is even more important that Third World nations not be idea debtors - they should contribute to the scientific pool of human knowledge.

It is too simple to excuse yourself from being an idea creditor because you lack industrial development. I have heard many people outside the United States tell me that they are too small, too young, or too poor to do "real" and long-term research. Instead, I am told, a developing nation can only draw from the inventory of ideas that comes from wealthy countries. Rubbish. In the digital world, there should not be debtor nations. To think you have nothing to offer is to reject the coming idea economy. In the new balance of trade of ideas, very small players can contribute very big ideas.

Next Issue: Bill of Writes
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.04 April 1995.]


WIRED 3.05 - A Bill of Writes

NEGROPONTE

Message: 23 Date: 5.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


A Bill of Writes

Dear Newt,

Your support of the digital age is deeply appreciated. As we move from a world of atoms to one of bits, we need leaders like you explaining that this revolution is a big one, maybe a 10.5 on the Richter scale of social change. Alvin and Heidi Toffler are dandy advisors; good for you for listening to them! The global information infrastructure needs a great deal of bipartisan cooperation, if only to help (read: force) other nations to deregulate and privatize their telecommunications. As you reach out across the world to evangelize the information age, people will listen.

However, there is something specific you could do for the digital revolution in your own congressional backyard, a few hundred feet from the Capitol building - perhaps something that has never been considered. Congress controls the world's largest library - it receives more than 30,000 items per day. Of these, perhaps 8,000 are saved. The Library of Congress is, quite frankly, out of shelf space - even if one includes the overflow cave it shares with Harvard University. The library, your library, is a giant dumpster full of atoms. Books and other materials check in but almost never check out.

But a wonderful largesse inspires this library: to read a book, one need not possess a special Library of Congress card, nor be a citizen of the United States. A person needs only to possess the desire to read. Well, actually, the individual has to be in Washington, DC, must be over 18 years old, and the librarians need to be able to find the thing requested. If mis-shelved, it might as well be lost forever.

Few people ever use the library because, in reality, almost no one can. The library is almost everything but usable, everything but digital. There are more than 100 million items, and virtually none are available in digital form. Recently, the library stuck its toe out onto the Internet, touching millions through exhibits on the World Wide Web. Indeed, last summer it received its first-ever digital books (never mind that no procedures exist for receiving those bits, and essentially no apparatus exists to deal with them).
As you know, almost every book published in the United States during the last 15 years has been produced digitally. Your next book will be, too, but I bet the atoms will still pile up in the depository - not the bits. This problem has not gone unnoticed. The National Science Foundation, the Advanced Research Projects Agency, and the Library of Congress are fully aware of the challenge to change those atoms into bits. The government has committed more than US$30 million over four years on digital-library research, including new means to convert, index, and navigate the wealth of bits in the global public library of tomorrow. Jefferson would be proud.

Copyright unbound
But Jefferson did not understand bits. He could not imagine that 1s and 0s would represent information and one day be read (and eventually understood) by machines. All of copyright law is essentially a Gutenberg artifact, bound to paper and construed in ignorance of the digital age. It will take us years to build digital libraries and longer to retool copyright law.

Intellectual property is an extraordinarily complex subject. We are almost clueless about how to handle digital derivative works and digital fair use. In a digital world, the bits are endlessly copyable, infinitely malleable, and they never go out of print. Millions of people can simultaneously read any digital document - and they can also steal it. So, how do we protect digital information? Our own export laws (a separate issue you may want to consider) stymie encryption shamelessly. The information age is in a bit of a mess when it comes to understanding who may access what, when, how, and under the control of whom.

But don't wait. You control the library that manages United States copyrights. Establish a Bill of Writes immediately. Force us to find solutions, so our children and grandchildren can benefit sooner, rather than later, from being digital.

A digital deposit act


Here is the idea. Pass a Bill of Writes - a digital deposit act - requiring that each item submitted to the Library of Congress be accompanied by its digital source. Make it illegal to obtain copyright otherwise. Publishers like Knopf and most authors will be concerned about the protection of their bits. The bill must include a bonded escrow agreement so these bits cannot be released without author and publisher approval.

Eventually, the Library of Congress could provide bountiful nourishment for the global infrastructure. Instead of being the "library of last resort," it might become the first place to look. In a richly woven infrastructure, the Library of Congress could be transformed from a depository into a "retrievatory." It would be closer to your desk and closer to the living-room couch than any of the thousands of public library buildings. A Library of Progress could be in the pockets of tomorrow's kids.

Having a Bill of Writes now means that we can spend the next 20 to 50 years hammering out new digital-property laws and international agreements without stunting our future. More importantly, it means that publishers and authors can elect to make their bits available after they decide they have earned enough, and the bits will be ready to go. Without a Bill of Writes, our grandchildren will spend a lot of time digitizing the 70 million items that will be saved by your library over the next 30 years. The British and the French are building gigantic new buildings to hold more shelves for future atoms. Let our country be the first to write being digital into law.

Sincerely,
Your friends at the MIT Media Lab

This column was co-authored with Professor Michael Hawley (mike@media.mit.edu), who holds appointments at MIT in Electrical Engineering and Computer Science, and Media Arts and Sciences.

Next Issue: Digital Videodiscs, Either Format, Are the Wrong Product
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.05 May 1995.]


WIRED 3.06 - Digital Videodiscs: Either Format Is Wrong

NEGROPONTE

Message: 24 Date: 6.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Digital Videodiscs: Either Format Is Wrong

Here we go again. Big guns and big stakes have been pulled into the fracas over two competing digital videodisc formats. Both of them will store studio-quality, full-length movies on a compact disc, but each format will do it in a slightly different way. And people are taking sides. Even big players from Hollywood are jumping in, as if they knew the difference between an angstrom and a micron. These mature and savvy businessfolk, as well as the press, refuse to understand that the issue is not video, but bits. Bits are bits. The moving image is only one data type. Surely, we cannot expect consumers to buy one machine or technology for video, another for audio, another for data, and yet another for multimedia. The screwball idea of owning a digital videodisc, which is nothing more than a movie player, is tantamount to digital obscenity.

Certainly, we must increase the number of bits per square millimeter on CDs, but we also need to treat those bits as nothing more or less than what they are. We do not need to agree in advance on precise digital standards and formats, and we cannot speculate in advance on all conceivable uses. Instead, we need to agree on "meta-standards," a way of talking about talking about those bits. Sound like double talk? It isn't. Listen.

Today, you can store roughly 5 billion bits on one side of a CD. If someone provided the means of increasing that by a factor of 10, it would be absolutely terrific. But I hope that whoever comes up with that scheme makes those bits as flexible as possible. They may be used for video and they may not be. Don't use the typical "standards committee" mind-set to remove the potentially rich new forms of information and entertainment that haven't even been thought of yet.

Unstandard standards
The physical world is unforgiving, so standards are desperately needed. Nonetheless, we cannot agree which side of the road to drive on. Europe has 20 different power plugs. And once standards are set in the world of atoms, they're nearly impossible to undo.

But the world of bits is different, more forgiving. Why can't the entertainment industry understand this? A string of bits can contain information about itself: what it is, how to decode it, where to get related data. Surely there are multiple applications and options for future digital formats. The world is not just about movies, movies, and more movies. We must not lock the format of the bits into a single standard and call it video.

By contrast, what we must get from the outset is the atoms: the diameter of the disc (the only variable that's not in dispute), the physical property of the small pits in the disc off which the laser bounces (as much as anything, an issue of choosing a wavelength of light that everyone can agree upon), and the thickness of the disc. If we don't agree on these, we are in very deep trouble. Although nobody is saying it, this is what the debate is really about, not video.
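The idea of bits that "contain information about themselves" can be made concrete with a tiny self-describing chunk: a header announces what the payload is and how long it runs, so one disc can carry video, audio, or data side by side. A minimal sketch in Python; the field layout (magic, type string, length) is invented for illustration, not any real disc format:

```python
# A self-describing chunk: the header says what the payload is and how
# to find its end, so a reader need not assume "video" in advance.
# Layout (all invented): magic | type-length | type | payload-length | payload

import struct

MAGIC = b"BITS"

def pack_chunk(content_type: str, payload: bytes) -> bytes:
    ctype = content_type.encode("ascii")
    header = MAGIC + struct.pack(">B", len(ctype)) + ctype
    return header + struct.pack(">I", len(payload)) + payload

def unpack_chunk(chunk: bytes) -> tuple[str, bytes]:
    assert chunk[:4] == MAGIC, "not a self-describing chunk"
    tlen = chunk[4]
    ctype = chunk[5:5 + tlen].decode("ascii")
    (plen,) = struct.unpack(">I", chunk[5 + tlen:9 + tlen])
    return ctype, chunk[9 + tlen:9 + tlen + plen]

chunk = pack_chunk("video/mpeg", b"\x00\x01\x02")
print(unpack_chunk(chunk)[0])   # -> video/mpeg: the bits say what they are
```

Agreeing on this kind of envelope is the "meta-standard"; what goes inside can stay open-ended.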

Beyond paperback movies


We must not forget the obvious: discs are round. Unlike a long tape's, a CD's geometry is intrinsically random-access, and thus interactive. We need to pay serious attention to what the physical standard will do for interactive applications versus just looking at a movie from beginning to end.

For example: How does the disc rotate? Is it like an audio CD, changing speed and slowing its angular velocity as the head moves to the outer tracks? Does it rotate at a constant angular velocity like your magnetic hard disc? Or should it be variable and pump out more bits the faster it spins, up to some practical limit?

The biggest issue concerns single- or double-sided discs. Curiously, the debate is largely rhetorical, as proponents of two-sidedness do not propose to read both sides simultaneously with a two-headed player. One very appealing feature of reading the top and bottom of the disc at the same time is that doing so allows the sort of seamless multimedia applications in which one head "plays" while the other "seeks." This feature is not even part of the current debate.

CD's last stand


It's crucial to get this new CD right and not lock it into old-fashioned thinking. Why? Because it will probably be the last CD format we will ever see. Package media of all kinds are slowly dying out, for two important reasons.

First, we are approaching costless bandwidth. Shipping all those CD atoms will be too difficult and expensive in comparison with delivering the bits electronically. Think of the Net as a giant CD with limitless capacity. Economic models may justify CDs for music and children's stories, making them last longer because they are played over and over again. But for the most part, CDs and all sorts of other package media (like books and magazines) will wither during the next millennium.

Second, solid-state memory will catch up to the capacity of CDs. Today, it seems outrageous to store a feature-length film in computer memory, but it won't be outrageous tomorrow. Solid-state memory offers the important feature of no moving parts.
In fact, 100 years from now, people will find it odd that their ancestors used any moving parts to store bits! So please, Sony, Philips, Toshiba, Matsushita, and all your partners in Hollywood, don't give us a digital videodisc. Give us a new medium to store as many bits as possible. Learn from CD-ROM and let the market invent the new applications and new entertainment customers want. We'll be much better off.

Next Issue: Affordable Computing
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.06 June 1995.]


WIRED 3.07 - Affordable Computing

NEGROPONTE

Message: 25 Date: 7.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Affordable Computing

Andy makes my computer faster. Bill uses more of it. Andy makes my computer yet faster and Bill uses yet more of it. Andy makes more; Bill uses more. What do you and I get when Intel and Microsoft keep adding and taking? Almost nothing.

My computer takes forever to start up. Loading my word processor is interminable. Each new release of an application is festooned with gratuitous options and an army of tiny icons the meanings of which I no longer remember. We've all heard that an application program exposes only the tip of its iceberg. Well, I normally use only the tip! My old Mac 512K went "boing" and was on. I still run Microsoft Word 4.0 (and would run Word 2.0 if I could) because it's fast and simple.

The last six years of advances in personal computing have resulted in diminishing returns when one considers the performance we see. According to Moore's law (Gordon Moore is Andy Grove's partner and mentor and the cofounder of Intel), since I installed Word 4.0 six years ago, my system should now be running 16 times faster - not slower. My lament is, Why can't my system run at the same apparent speed it did six years ago and cost 16 times less?

What's going on? Simple. First, the cost of personal computers is held artificially at US$1,500 (give or take $500), because the current market can bear this amount, and it provides suitable profit margins for manufacturers. Second, software is growing far too complex (featuritis), so clean, simple systems are almost extinct. Third, it has been historically difficult for American technology companies to be in the commodity business and sell 10 million computers at $150 instead of 1 million at $1,500.

Software and hardware companies can get away with the $1,500 price tag when their primary customers are other businesses. But now that the home is the fastest-growing market, they've got to think about you as the customer - and this is an entirely different game. Don't let anyone tell you that a $1,500 price tag is endemic because computers are just plain expensive. Do you need proof that they can be cheap(er)?

Nintendo has released a 20-MHz, 32-bit RISC machine called the Virtual Boy (what an awful name) that includes extraordinary 3-D graphics and stereo sound; two built-in displays with four levels of gray; and a novel, two-hand game controller. Its retail price is $199, and it comes with one game cartridge. This product arrives at a time when the yen is below 85 to the dollar. Nintendo is not losing money on the razor to sell the blades.

Why not take that kind of power and build it into a more general-purpose - but stripped-down - machine, with Netscape or Mosaic built in, that everyone can afford? Congress worries about the information-rich versus the information-poor, but most of its members probably don't realize that computers can cost less than bicycles.

Running Moore's law in reverse


In 1976, I had a CRT terminal, the Fox, made by Perkin-Elmer (formerly in the business). The Fox had a small footprint, compact display, and minimal but adequate keyboard. It cost $200. According to Moore's law, the achievable complexity of an integrated circuit doubles every 18 months. If I correlate complexity with dollars, $200 of 1976 Fox should be worth about $1 million of computer today (the arithmetic is sketched below). So, for one five-thousandth of that amount, why can't I get what I had in the Fox, plus color and local computing? The answer is, I can.

Manufacturers just need to be pushed into this commodity business so that every school and low-income household can own a computer. This can be achieved without subsidies by trimming the fat from today's PCs and making some bare-bones engines that word process, telecommunicate, and provide access to online services. Use a modest color display: a 13-inch window into the Net is better than no window at all. People are always amazed by the amount of daylight let into a dark room by a small hole (witness the Pantheon in Rome). It's the same on the Net.

Advertiser-supported computing

But even $200 may not be low enough. Or, manufacturers may say, Fine, but give me an order for 10 million, then I'll build it for you. OK. Here's an idea for how such a notion can be achieved, getting both the order for 10 million and reducing the price to zero (or lower). Listen carefully, AOL.

Today, there are more than 100 million computer screens in the United States. Think of every screen as a potential billboard. Let's assume that each one is turned on once a day and, lo and behold, each day a new advertising message appears - the screen saver for the day. That message could be targeted to specific users. No sense showing me an ad for Bourbon if I don't drink hard liquor or a commercial for Geritol if I'm 6 years old.

Manufacturer X (the one taking the order for 10 million machines from AOL) would probably have to build a small RF receiver so it could load these machines with personalized commercials but without requiring the user to log in or pay for connection time. This could be done, for example, through terrestrial broadcasts using the likes of Mobile Telecommunications's SkyTel. There are several ways to implement this and more ways to make the business model attractive to vendors (and the computer more or less free). Advertisers would pay to gain access to what turns out to be about 2,000 acres of advertising space (changeable per square inch, per day, or per hour). That money could subsidize the cost of the computer and even pay for you to use it.

Will this really work? Yes. Of course, many details need to be resolved. My point is not this specific example, but the need for some creative thinking about how to make and price PCs. I am no great fan of advertising, but it does represent a quarter-of-a-trillion-dollar industry, and there must be a way to use its size to make computing affordable to all Americans. So, step one is to get Andy and Bill to stop scratching each other's backs, and step two is to find new business models to make low-cost PCs available to consumers through the intelligent use of advertising. Come on, boys.

Next Issue: PC Outboxes TV
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.07 July 1995.]
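The column's Moore's-law arithmetic checks out; a minimal worked sketch (the 18-month doubling, the $200 Fox price, and the "16 times" figure are the column's own; the 19-year span from 1976 to this 1995 column is my reading of the endpoint):

```python
# Moore's law as the column uses it: complexity doubles every 18 months,
# so the growth factor over y years is 2 ** (y / 1.5).

def moore_factor(years: float) -> float:
    """Growth under an 18-month doubling period."""
    return 2 ** (years / 1.5)

# Word 4.0 installed six years ago: 2 ** (6 / 1.5) = 16x,
# the "16 times faster" the column cites.
print(moore_factor(6))                 # 16.0

# The $200 Fox terminal of 1976, compounded over ~19 years to 1995:
# roughly $1.3 million of equivalent complexity - the column's
# "about $1 million", and 5,000 times the Fox's price.
print(round(200 * moore_factor(19)))   # ~1300000
```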


WIRED 3.08 - Bit by Bit, PCs Are Becoming TVs. Or Is It the Other Way Around?

NEGROPONTE

Message: 26 Date: 8.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Bit by Bit, PCs Are Becoming TVs. Or Is It the Other Way Around?
Only a year ago, people argued over which one would serve as the port of entry for the I-way, and which would be the information and entertainment appliance for the home. Well, the argument is over. The answer is the PC. George Gilder is right. There is life after television, and it's all about the PC. But don't confuse television with television sets. I'm not suggesting that relaxation is a thing of the past. To understand the rise of the PC and the demise of the TV set is to consider the role of the TV in the everyday life of Americans - as well as the degree to which that role can be played out more fully by other means in the digital age.

Is It Location, Location, Location?

Although Vincent Astor talked about real estate in a larger sense, consider the microlandscape of your own home. Whether in a living room, a library, or a bedroom, a television set normally includes a large screen. The viewer sits beyond arm's reach, often with others, on a sofa. A PC, on the other hand, frequently has a smaller screen and is rarely located in the living room. The user sits upright in a chair, at a desk or table, with his or her nose roughly 18 inches from the screen. These particular customs stem not only from today's versions of these appliances but also from the fact that human interaction feels more meaningful when we are next to each other, not tethered by electrons. PCs, however, will inevitably become more bedworthy. And television sets will grow to resemble keyboardless computers, installed more like Sheetrock than furniture.

The difference is not really social. Some still consider the experience of humans watching TV side by side to be more social than the interaction of the 10 million Americans online today. Yet we know that Americans engage more in "community" than "information retrieval" while online. (On America Online, according to its CEO Steve Case, the percentage ratio is 60:40.)

Where the Bits Are


The basic difference between today's TVs and PCs has nothing to do with location, social habits, or our need to relax. It has to do with how the bits arrive. The TV takes in bits radiated by cable, satellite, or terrestrial transmission. These bits are essentially thrown at the TV to catch-as-catch-can. By contrast, the PC receives its bits because it (or you) asks for them explicitly (or implicitly). That's the difference. In both cases, the TV and the PC are bit processors, accumulating bits as they come, or reaching for them from afar. Sometimes, you'll want to pull on bits; other times, you'll want them pushed at you - whether you're in the bedroom or the living room, sitting or lying, with someone or alone.

For a while, computer designers were adding more and more video to their computers; meanwhile, TV manufacturers were adding more and more computing to their TVs. Modern TVs have chips running megaMIPs, and Intel processes VCR-quality TV (real-time, full-screen TV) on its current Pentium. Yet companies that made both TVs and PCs found that the respective divisions didn't even talk to each other: one group was addressing the "consumer" market, the other the "computer-user" market. Any knucklehead who believes such a distinction exists today doesn't deserve gainful employment. They are the same market.

Media Arts and Sciences


During the past two springs, I co-taught the Media Lab's introductory undergraduate subject, MAS100. This year, we asked students to do an assignment revealing information about PCs and TVs. The following are facts uncovered by four of those students.

Student One: Aneel N. Nazareth (achmed@mit.edu). Fact: http://voyager.paramount.com/ is where you'll find the first Voyager Web page, which predates the TV show.

Student Two: Derek Lindner (buddha@mit.edu). Facts: The number of TV schedules on the Net: 27. The number of TV shows with a Web home page: 540. The number of TV networks in Peru: 8. The number of Internet subnets in Peru: 37. The number of TV networks in Lebanon: 44. The number of Internet networks in Lebanon: 1.

Student Three: Annie Valva (valvaan@hugse1.harvard.edu). Facts: In 1994, US consumers spent about the same amount on PCs as on TVs (US$8.07 billion on PCs; $8.4 billion on TVs). Following the usage habits of 1,200 new PC users, researchers (from AST) found 13 hours a week were spent on the PC to 9 hours watching TV.

Student Four: Brian Tivol (tivol@mit.edu). Facts: According to the 1994 Microsoft Annual Report, "This year, the installed base of Windows doubled to more than 60 million." The final episode of M*A*S*H (the show with the largest Nielsen share) aired in only 50,150,000 homes but still beat out the Super Bowls!

But Money Talks


Change will be fast, but not overnight - we are almost clueless about economic models, other than those currently in use. So far, this process of pushing bits at people has been in real time only. When people talk about 500-channel TV, they mean 500 parallel streams. They don't mean one program after another, broadcast in one five-hundredth of real time. You don't download TV, you join an ongoing program. That's why commercial TV stations and cable operators are delivering as many eyeballs as possible to advertisers - so they can afford to bring the programs to the people in the first place. When you buy a can of Coke, you are paying a few cents for the drink and the can, and nanodollars for television advertising. No doubt, the means of financing the bits will look strange to our great-great grandchildren. But for today, it's what makes television work.

Eventually, we'll find new economic models, probably based on advertising and transactions. Television will become more and more digital, no matter what. These are givens. So it makes no sense to think of the TV and the PC as anything but one and the same. It's time TV manufacturers invested in the future, not the past - by making PCs, not TVs.

Next Issue: PC Outboxes TV
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.08 August 1995.]


WIRED 3.09 - Get a Life?

NEGROPONTE

Message: 27 Date: 9.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Get a Life?

Any significant social phenomenon creates a backlash. The Net is no exception. It is odd, however, that the loudest complaints are shouts of "Get a life!" - suggesting that online living will dehumanize us, insulate us, and create a world of people who won't smell flowers, watch sunsets, or engage in face-to-face experiences. Out of this backlash comes a warning to parents that their children will "cocoon" and metamorphose into social invalids.

Experience tells us the opposite. So far, evidence gathered by those using the Net as a teaching tool indicates that kids who go online gain social skills rather than lose them. Since the distance between Athens, Georgia, and Athens, Greece, is just a mouse click away, children attain a new kind of worldliness. Young people on the Net today will inevitably experience some of the sophistication of Europe. In earlier days, only children from elite families could afford to interact with European culture during their summer vacations abroad. I know that visiting Web pages in Italy or interacting with Italians via e-mail isn't the same as ducking the pigeons or listening to music in Piazza San Marco - but it sure beats never going there at all. Take all the books in the world, and they won't offer the real-time global experience a kid can get on the Net: here a child becomes the driver of the intellectual vehicle, not the passenger.

Mitch Resnick of the MIT Media Lab recently told me of an autistic boy who has great difficulty interacting with people, often giving inappropriate visual cues (like strange facial expressions) and so forth. But this child has thrived on the Net. When he types, he gains control and becomes articulate. He's an active participant in chat rooms and newsgroups. He has developed strong online friendships, which have given him greater confidence in face-to-face situations. It's an extreme case, but isn't it odd how parents grieve if their child spends six hours a day on the Net but delight if those same hours are spent reading books? With the exception of sleep, doing anything six hours a day, every day, is not good for a child.

Anyware
Adults on the Net enjoy even greater opportunity, as more people discover they can work from almost anywhere. Granted, if you make pizzas you need to be close to the dough; if you're a surgeon you must be close to your patients (at least for the next two decades). But if your trade involves bits (not atoms), you probably don't need to be anywhere specific - at least most of the time. In fact, it might be beneficial all around if you were in the Caribbean or Mediterranean - then your company wouldn't have to tie up capital in expensive downtown real estate.

Certain early users of the Net (bless them!) are now whining about its vulgarization, warning people of its hazards as if it were a cigarette. If only these whiners were more honest, they'd admit that it was they who didn't have much of a life and found solace on the Net, they who woke up one day with midlife crises and discovered there was more to living than what was waiting in their e-mail boxes. So, what took you guys so long? Of course there's more to life than e-mail, but don't project your empty existence onto others and suggest "being digital" is a form of virtual leprosy for which total abstinence is the only immunization.

My own lifestyle is totally enhanced by being online. I've been a compulsive e-mail user for more than 25 years; more often than not, it's allowed me to spend more time in scenic places with interesting people. Which would you prefer: two weeks' vacation totally offline or four to six weeks online? This doesn't work for all professions, but it is a growing trend among so-called "knowledge workers." Once, only the likes of Rupert Murdoch or the Aga Khan could cut deals from their satellite-laden luxury yachts off the coast of Sardinia. Now all sorts of people from Tahoe to Telluride can work from the back seat of a Winnebago if they wish.

B-rated meetings
I don't know the statistics, but I'm willing to guess that the executives of corporate America spend 70 to 80 percent of their time in meetings. I do know that most of those meetings, often a canonical one hour long, are 70 to 80 percent posturing and leveling (bringing the others up to speed on a common subject). The posturing is gratuitous, and the leveling is better done elsewhere - online, for example. This alone would enhance US productivity far more than any trade agreement.

I am constantly astonished by just how offline corporate America is. Wouldn't you expect executives at computer and communications companies to be active online? Even household names of the high-tech industry are offline human beings, sometimes more so than execs in extremely low-tech fields. I guess this is a corollary to the shoemaker's children having no shoes.

Being online not only makes the inevitable face-to-face meetings so much easier - it allows you to look outward. Generally, large companies are so inwardly directed that staff memorandums about growing bureaucracy get more attention than the dwindling competitive advantage of being big in the first place. David, who has a life, needn't use a slingshot. Goliath, who doesn't, is too busy reading office memos.

Luddites' paradise
In the mid-1700s, mechanical looms and other machines forced cottage industries out of business. Many people lost the opportunity to be their own bosses and to enjoy the profits of hard work. I'm sure I would have been a Luddite under those conditions.

But the current sweep of digital living is doing exactly the opposite. Parents of young children find exciting self-employment from home. The "virtual corporation" is an opportunity for tiny companies (with employees spread across the world) to work together in a global market and set up base wherever they choose. If you don't like centralist thinking, big companies, or job automation, what better place to go than the Net? Work for yourself and get a life.

Next Issue: Year 2020, the Fiber-Coax Legacy
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.09 September 1995.]


WIRED 3.10 - 2020: The Fiber-Coax Legacy

NEGROPONTE

Message: 28 Date: 10.1.95 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

2020: The Fiber-Coax Legacy

In 2020, people will look back and be mighty annoyed by our profligate insistence on wiring a fiber-coax hybrid to the home rather than swallowing the cost of an all-fiber solution. They'll ask, "Why didn't our parents and grandparents plan more effectively for the future?"

As far as the American home is concerned, the phone companies have the right architecture (switched services), and the cable companies have the right bandwidth (broadband services). We need the union of these: switched broadband services. But how do we get from here to there? No one will deny that the long-term solution is to install fiber all the way, but the benefits seem diffuse and the costs acute. In the eyes of the telcos and cable companies, the question is financial - and since the near-term balance sheets don't add up, fiber is not being laid all the way.

One way around this problem is to circumvent the private market and let a telecommunications monopoly build the infrastructure, which is exactly what Telecom Italia is doing. It has declared fiber to the home as its goal and will swallow the initial cost of meeting this goal in the name of national interest. This is one of the few benefits of a government-owned monopoly: Italy will have a far better multimedia telecommunications system than the United States by 2000.

The incremental approach


A private, profit-driven company cannot always do the right thing, because the payback may be too distant and uncertain. So, rather than rush headlong into an all-fiber future, US corporations have decided to mix fiber-optic and coaxial cable. With the so-called hybrid fiber coax, fiber trunk lines are brought to a point near a group of 500 to 2,000 homes and then patched into copper coax that is shared by the entire group. Both the cable companies and the phone companies have come to this same interim solution, more or less unanimously, but from different perspectives.

Why? Running coax is US$400 per household cheaper than bringing fiber all the way. Part of that expense is coping with and switching all that fiber. Part is the differently skilled labor needed. Part is the fact that TVs will need an adapter. But none of that $400, mind you, is the cost of the fiber, which, these days, is more reliable and cheaper than copper, even including the connectors.

So, the cost difference today between hybrid and pure fiber is $400 per household. That estimate was $1,000 two years ago and will probably be $200 in a year or two. If we base our decision not to run fiber on a number that is dropping so rapidly, have we really made the right choice? If what stands between me and fiber to my home is $400, I'll raise my hand and pay my share. I bet others would too. Maybe, in staring so hard at the bottom line, we are failing to remember what's really going on here.

Catering to the couch


Face it, the argument for a fiber-coax installation is an argument for incrementalism. Friends of mine have likened the hybrid solution to cellular telephony. Once a cell gets overloaded, the argument goes, it can be broken into subcells. By extension, when the fiber-coax solution does not meet demands, the fiber beachhead can be moved forward to serve, say, 100 instead of 1,000 homes. Well, that almost works. But the hybrid solution makes two enormous assumptions about how people will use networks. One is that a home will be happy sharing a gigabit per second with as many as 2,000 neighbors. The other is that all homes will consume more bits than they generate. Both assumptions are flawed.

Bandwidth is a complicated concept because it combines instantaneous bursts and continuous throughput. As soon as you string together homes like Christmas tree lights, you then assume that each home will receive enough fast bursts to make the overall bandwidth look and feel fast. But it won't take too long for a few simultaneous users to gobble up a gigabit per second of bandwidth, especially as audio and video become commonplace on the Net.

The second and bigger problem is symmetry. This topic is hotly contested by cable and telephone companies, who don't believe consumers want to send out as many bits as they take in. Cable companies allocate the coax spectrum with copious bandwidth flowing into the home and precious little back to the head end. But that logic fails when you reconsider the notion of a head end. Where is the head end in a true, switched broadband system? The Net has shown itself to be a heterogeneous collection of nodes, each of which can be a source or a sink, a transmitter or a receiver. As more and more people start entrepreneurial services from their home PCs, we will need symmetrical systems, designed without a "head-end prejudice." The assumption that the average American is a couch potato involved in nothing but consuming advertiser-supported bits is wrong and, frankly, insulting.
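The sharing arithmetic behind that first flawed assumption is stark. A quick check using the column's own trunk and neighborhood figures, plus an assumed 1.5 Mbit/s compressed video stream (the low end of the per-viewer broadband range the author cites elsewhere in these columns):

```python
# How far does a shared gigabit go? Divide one fiber-coax trunk among
# the homes on its segment and count simultaneous video streams.

TRUNK_BPS = 1_000_000_000   # 1 Gbit/s shared by one coax segment
HOMES = 2_000               # homes strung onto that segment (column's figure)
STREAM_BPS = 1_500_000      # one VHS-quality MPEG stream (assumed)

print(TRUNK_BPS / HOMES)          # 500000.0 - half a megabit sustained per home
print(TRUNK_BPS // STREAM_BPS)    # 666 streams saturate the trunk...
print(HOMES / (TRUNK_BPS // STREAM_BPS))  # ...about one home in three tuned in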

Joe six-packets
The use of bandwidth is generational. As soon as kids find the Net alternative, they spend less time watching TV. The number of Web sites is doubling every 53 days. These will increase, not decrease, and provide the basis for a huge nano-economy when we crack the nut of e-cash.

Andy Lippman, associate director of the Media Lab, has a nice way of putting it. When people take him to task about the real need for symmetry in future communications systems, he notes that it's already built into our current ones, reserved for the head ends, not the customers. But more and more people will want to be their own head ends.

Our wiring and our consumption of new media are deeply interwoven. What we see in the current fiber-coax strategies is fiscal timidity, justified by the usage patterns of an old-line broadcast model, not the Net. There is a way to do it right, and that is to provide fiber all the way to the home. Instead of wasting time justifying half-baked ideas, let's find ways to finance the solution. The Italians already have.

Next Issue: Being 10
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.10 October 1995.]


WIRED 2.09 - Why Europe is So Unwired

NEGROPONTE

Message: 15 Date: 9.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Why Europe is So Unwired

Do you realize that in France the first six letters of a keyboard don't spell QWERTY but AZERTY? In March of this year, when French Culture Minister Jacques Toubon announced the decision to rid the French language of foreign (read: English) words by making it illegal (a US$3,500 fine) to use such words in company names and slogans, I was sadly reminded of a 1972 job I conducted for the Shah of Iran. My task was to provide a color word processor - the Shah wished to see Farsi texts in which color depicted the age of a word. His desire was to understand his language rather than purge it. I suppose, by contrast, Minister "James Allgood" plans to change all stop signs to "Arrêt."

Given this backdrop of nonsense at the highest level of government, is it much of a surprise that Europe is such a weak player in the computer and telecommunications industry? Of all fields, this industry is truly global and borderless. And as with air-traffic control, English is the lingua franca. Bits don't wait in customs; they flow freely across borders. Just try stopping them. WIRED's first World Wide Web page, for example, was developed in Singapore - a place whose support for freedom of the press is dubious, a place William Gibson referred to as "Disneyland with the Death Penalty" (WIRED 1.4, page 51).

Many artistic, industrial, and intellectual movements are driven by distinctly national and ethnic forces. The digital revolution is not one of them. Its ethos is generational and young. The demographics of computing are much closer to rock music than theater. French rock star Johnny Hallyday is allowed to sing in English, after all. If Europe wishes to remain at the vanguard of culture, it must step off its high horse and look more imaginatively at the future. Maybe it is time to discontinue ministries of culture.

Being Wise Not Smart


Jacques Attali - special advisor for the last 12 years, since he was 38, to the president of the French Republic - whom Mitterrand referred to as his "personal computer," has written 17 books on everything from Europe to the history of time. So why didn't such a smart interface agent move into the digital generation? Because like most places in Europe, France is a top-down society, where a job is a place one occupies and protects. It is not a process of building, creating, and dreaming. Incentives for young entrepreneurs are almost nonexistent. Compared to their US counterparts, French young people are just not taken seriously.

Double-breasted wisdom reduces risk. A generally aging population enjoys stability and places confidence most easily in those who have had considerable and tested experience. Ballet dancers, downhill skiers, and mathematicians may peak at thirtysomething; CEOs and national leaders, by contrast, are groomed by the passage of time. The word "leader" presumes age, despite Alexander the Great, who at his death was six years younger than Bill Gates is today.

I happened to be in Paris in May 1968, when students my age took to the streets. I asked myself, Why are we, in the United States, so complacent and docile? Fourteen years later, I found myself working directly for the Élysée Palace. And, guess what? Many of the people orbiting Mitterrand were the same people who had hurled paving stones through the tear gas in 1968.

Venture Void
When people ask me why so many new ideas in my field come from the United States, I talk about the respect we give to young people and to our heterogeneous culture. The real difference is our venture capital system, which is almost totally absent in Japan and Europe, where accountants intermix venture money with large leveraged buyouts. Therefore, the statistics do not show the real difference between them and the United States, where venture capital firms spent US$3.07 billion in 1993. The result is many fewer young European and Japanese companies that combine the genius of the hacker with the drive of the entrepreneur. This is particularly important when the entry cost is nontrivial and distribution determines the difference between success and failure.

New ideas are not just about capital. They are also about risk and the willingness to take it. The flip side of venture capital is the risk young people are frequently willing to take with something even bigger. I have seen marriages fail, people work themselves to death (literally), and an obsession for success that overshadows every other human dimension. Good or bad, such obsessive commitment is a key part of many new ventures. The currency of achievement is often not money but personal fulfillment and passion, something too easily thwarted by the bureaucracies of a homogeneous, old society.

The Nail That Sticks Up Highest


I was once asked by a former Japanese minister of education what I would do if I could do just one thing to improve the grammar-school system of that country. My reply: "Abolish uniforms." While Europe has less obvious uniforms, educational freedom is still limited. Only England respects and even cultivates idiosyncrasy. The result of this lack of educational freedom is less playfulness and an infrequent convergence of intellectual cultures, which is where computer ideas have traditionally come from.

One of MIT's most significant computer forces during the early '60s came from its model railroad club. Another came from the Science Fiction Society. Multimedia has disparate roots in storytelling, drama, music, and cinematography. The point is that new ideas do not necessarily live within the borders of existing intellectual domains. In fact, they are most often at the edges and in curious intersections. This means that institutions like universities and PTTs have to embrace some very anti-establishment ideas. Europe's dominantly state-run universities and PTTs just don't do that very well. They run a close first and second for knocking down new ideas. The European Union is now faced with a global information infrastructure in which it just may not be a playeur.

Next Issue: Human Interface: Sensor Deprived
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.09 September 1994.]


WIRED 2.08 - Prime Time Is My Time: The Blockbuster Myth

NEGROPONTE

Message: 14 Date: 8.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Prime Time Is My Time: The Blockbuster Myth

Most equipment and network providers believe that entertainment will finance the superhighway and that video-on-demand, VOD, is the driving force or killer app of our wired future. I do not disagree with this view, but I marvel at the short-sighted, incomplete, and outright misleading conclusion drawn from it.

The case for VOD goes as follows: Let's say a videocassette-rental store has a selection of 2,000 tapes. Suppose it finds that 5 percent of those tapes result in 90 percent of all rentals. Most likely, a good portion of that 5 percent would be new releases and would represent an even larger proportion of the store's rentals if the available number of copies were larger. Videocassette-rental stores will go out of business within a decade. (It makes no sense to ship atoms when you can ship bits.) The easy conclusion is that the way to build an electronic Blockbuster is to offer only those top 5 percent, those primarily new releases. Not only would this be convenient, it would provide tangible and convincing evidence for what some still consider an experiment.

It would take too much time and money to digitize all 29,000 movies made in America by 1990. It would take even more time to digitize the 30,000 TV programs stored in the Museum of Television & Radio in New York, and I'm not even considering the movies made in Europe, the tens of thousands from India, or the 12,000 hours per year of soaps made in Mexico by Televisa. The question remains: Do most of us really want to see just that top 5 percent? Or, is this herd phenomenon driven by the old technologies of distribution?

AAATV
Some of the world's senior cellular telephone executives recite this jingle: "anything, anywhere, anytime." These three A's are a sign of being modern and being wired (and wireless, actually). When I hear this mantra I try not to choke, because my goal is to have "nothing, nowhere, never" unless it is timely, important, amusing, relevant, or capable of engaging my imagination. AAA stinks as a paradigm for human communication -- agents are much better. But AAA is a beautiful way to think about TV.

We hear a great deal of talk about 1,000 channels of TV. Allow me to point out that, even without satellite, more than 1,000 programs are delivered to your home each day! Admittedly, they are sent at all -- and odd -- hours. The 150-plus channels of TV listed in Satellite TV Week add another 2,700 or more programs available per day. If your TV could store every program transmitted, you would already have five times the selectivity offered in the superhighway's broad-brush style of thinking. But, instead of keeping them all, have your agent-TV grab the one or two in which you might have interest, for you to see anywhere and anytime.

Let AAATV expand to a global infrastructure: the quantitative and qualitative changes become interesting. Some people might listen to French television to perfect their French, others might follow Swiss Cable's Channel 11 to see unedited German nudity (at 5 p.m. New York time), and the 2 million Greek Americans might catch any one of the three national or seven regional channels of Greece. The British devote 75 hours per year to the coverage of chess championships and the French commit 80 hours of broadcasting to the Tour de France. Surely American chess and bicycle enthusiasts would enjoy access to these events -- anytime, anywhere.

My point is simple: the broadcast model is what is failing. "On-demand" is a much bigger concept than not-walking-out-in-the-rain or not-forgetting-a-rented-cassette-under-the-sofa-for-a-month. It's consumer pull versus media push, my time -- the receiver's time -- versus the transmitter's time.

Rethreaded TV
Beyond recalling an existing movie or playing any of today's (or yesterday's) TV from around the world (roughly 15,000 concurrent channels), VOD could provide a new life for documentary films, even the dreaded "infomercial." The hairs of documentary filmmakers will stand on end when they hear this. But it is possible to have TV agents edit movies on the fly, much like a professor assembling an anthology using chapters from different books.

If I were contemplating a visit to the southern coast of Turkey, I might not find a documentary on Bodrum, but I could find sections from movies about wooden-ship building, nighttime fishing, underwater antiquities, and Oriental carpets. These all could be woven together to suit my purpose. The result would not earn an "A+" in Introductory Filmmaking, but one doesn't expect an anthology to be Shakespeare. Production values, in this case, lie in the eyes of the beholder. It would help to thread chunks made by great organizations such as National Geographic, PBS, or the BBC, but the result would have meaning only to me.
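Mechanically, the weaving is retrieval plus concatenation. A toy version, with an invented catalog standing in for those film sections:

    # Toy rethreaded TV: assemble a personal documentary from segments
    # of different films. Catalog and topics are invented examples.
    catalog = [
        {"film": "Aegean Shipwrights", "topic": "wooden-ship building", "mins": 9},
        {"film": "Night Nets",         "topic": "nighttime fishing", "mins": 7},
        {"film": "Sunken Cities",      "topic": "underwater antiquities", "mins": 11},
        {"film": "Loom and Dye",       "topic": "oriental carpets", "mins": 8},
    ]
    trip = ["wooden-ship building", "nighttime fishing",
            "underwater antiquities", "oriental carpets"]

    program = [seg for topic in trip
               for seg in catalog if seg["topic"] == topic]
    minutes = sum(seg["mins"] for seg in program)
    print(f"{len(program)} segments, {minutes} minutes, meaningful only to me")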

Cottage Television
Finally, the 3.1 million camcorders sold in the US last year cannot be ignored. If the broadcast model is colliding with the Internet model, as I firmly believe it is, then each person can be an unlicensed TV station. Yes, Mr. Vice President, this is what you said in LA. Even before we understand how the Internet will function as a commercial enterprise, we must reckon with
uncountable hours of video. I am not suggesting we consider every home movie to be a prime-time experience. What I am saying is that we can now think of TV as a great deal more than high-production-value mass media when the content strikes home, so to speak.

Most telecommunications executives understand the need for broadband into the home. (Recall that broadband, for me, is 1.5 to 6 Mbits per household member, not Gbits.) What they cannot fathom is the need for a back channel of similar capacity. The video back channel is already accepted in teleconferencing and is a particularly fashionable medium in divorced families for the parent who does not have custody of the children. That's live video.

Consider "dead" video. In the near future, individuals will be able to run video servers in the same way that 57,000 Americans run computer bulletin boards today. That's a television landscape of the future which looks like the Internet. Point to multipoint may swing dramatically toward multipoint to multipoint, on my time.

Next Issue: Why Europe is Unwired (Part One)
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.08 August 1994.]


WIRED 2.07 - Learning by Doing: Don't Dissect the Frog, Build It

NEGROPONTE

Message: 13 Date: 7.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Learning by Doing: Don't Dissect the Frog, Build It

I'm always amazed when I read about how badly young Americans are educated, not because such statements are necessarily untrue, but because most authors and critics go on to compare our children with those of France, Korea, or Japan, whose brains have been stuffed with thousands of facts. Most American children do not know the difference between the Baltics and the Balkans, or who the Visigoths were, or when Louis XIV lived. So what? I'll bet you don't know that Reno is west of Los Angeles.

Let me point out the heavy price paid in those countries for requiring young minds to master this apparent font of knowledge. Children in Japan are more or less dead on arrival when they enter the university system. Over the next four years they'll feel like marathon runners asked to go rock climbing at the finish line. Worse, those young people didn't learn a thing about learning and, for the most part, have had the love of it whipped out of them.

In the 1960s, most pioneers in computers and education advocated a crummy drill-and-practice approach, using computers on a one-on-one basis, in a self-paced fashion, to teach those same God-awful facts more effectively. Now, with multimedia, we are faced with a number of closet drill-and-practice believers, who think they can colonize the pizazz of a Sega game to squirt a bit more information into the thick heads of children.

Don't Dissect a Frog, Build One


On April 11, 1970, Seymour Papert held a symposium at MIT called "Teaching Children Thinking" and drove a new stake into the ground of epistemology. His notion was based on using computers as engines that children would teach, and thus learn by teaching. He moved the locus of interest from how computers can teach to how children learn. This astonishingly simple idea simmered for almost fifteen years before it came to life through PCs. Today, when almost 30 percent of all American homes contain a personal computer, its time really has come.

Certainly, some learning derives from great teaching and telling a good story. We all remember our good teachers. But a major measure of learning results from exploration, from re-inventing
the wheel and finding out for yourself. Until the computer, the tools and toys for these experiences were limited, special-purpose apparatuses, frequently administered with extreme control and regimentation (my excuse for not learning chemistry). The computer changed this radically.

All of a sudden, learning by doing has become the standard rather than the exception. Since computer simulation of just about anything is now possible, one need not learn about a frog by dissecting it. Instead, children can be asked to design frogs, to build an animal with froglike behavior, to modify that behavior, to simulate the muscles, to play with the frog.
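A child's first "built" frog might be nothing grander than the loop below. Every number in it is a knob, invented here, whose whole point is to be changed to see what happens:

    # Toy constructivist frog: hop toward the fly, then tweak the behavior.
    # hop_length and tongue_reach are the knobs a child would play with.
    frog_x, fly_x = 0.0, 10.0
    hop_length = 2.0       # what if the frog hops farther?
    tongue_reach = 1.0     # ...or grows a longer tongue?

    hops = 0
    while abs(fly_x - frog_x) > tongue_reach and hops < 100:
        frog_x += hop_length if fly_x > frog_x else -hop_length
        hops += 1

    if abs(fly_x - frog_x) <= tongue_reach:
        print(f"caught the fly in {hops} hops")
    else:
        print("the frog overshoots forever, which is also a lesson")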

Street Smarts on the Superhighway


Think back to your best teachers. If you are a teacher, think of your best students. Or consider your most admired engineer, scientist, or artist -- living or dead. What do they all have in common? Passion. Passion expresses itself in many forms of excitement and curiosity; it is frequently playful, and it is always consuming.

Students from the often-admired educational systems of France, Korea, and Japan are routinely dispassionate, even more so than American children, for passion is anathema to the elders of education in these countries -- something to be drilled out of all students. Passion is OK in sports, even wanton abandon. But not in intellectual activities. In some countries children are still to be seen, not heard.

On the Internet, by contrast, a child's voice knows no boundary, and nobody there can say for certain who is a child. While we can only roughly estimate that 30 million people use the Internet's 2,217,000 host machines (as of January 1994), trying to guess their ages is even more difficult. In spite of its inception as a tool for the august and older academic community, the average age of an Internet user today is 26 (a number derived with considerable care by MIT undergraduates Jonathan Litt and Craig Wisneski). I expect that number to drop to 15 by the year 2000.

Toys to Learn With


Papert is the Lego Professor of Learning Research at MIT. People frequently smile when they hear such an odd marriage of a leading world toy manufacturer with an institution of higher learning and advanced research. This oddity, however, reflects precisely the shared agenda of Lego and the Media Lab: understanding the acquisition of knowledge at very young ages. If the roots of a small plant are damaged, fertilizer will do only so much, no matter the amount or method of application. For this reason, we see elementary school as the critical moment.

Funding of Media Lab research from Interlego A/S, the Danish company that owns Lego in the US, has resulted in an important contribution to products in Lego's Dacta division ("LEGO TC Logo" and "Control Lab"), which have been used in elementary and secondary schools by more than one million children. The computer-controllable Lego allows children to endow their physical constructs with behavior. Both anecdotal evidence and careful testing results reveal that this constructivist (as Papert calls it) approach has an extraordinary reach, across a wide
range of cognitive and learning styles. In fact, many children said to be learning disabled flourish here. Perhaps we have been more "teaching disabled" than "learning disabled."

Even without a robust theory of why building things helps us learn, why designing frogs may be better than dissecting them, we can rest assured that constructivist tools will grab an increasing piece of the market for learning technology. This is happening precisely at a time when more and more people are taking the publishing model seriously, perhaps too seriously, and expanding it to multimedia. There may be a surprising end run by more design-based software and networking technology.

Current work with Lego at the Media Lab includes a computer-in-a-brick prototype, which demonstrates a further degree of flexibility and opportunity for constructivism. It includes interbrick communications and opportunities to explore parallel processing in ways that none of us could before. Kids using this today will learn physical and logical principles you and I learned in college. Imagine a Lego set in the year 2000, where each brick says: "Intel inside."

Next Issue: Prime Time Is My Time
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.07 July 1994.]


WIRED 2.06 - Less Is More: Interface Agents as Digital Butlers

NEGROPONTE

Message: 12 Date: 6.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Less Is More: Interface Agents as Digital Butlers
Al Gore need not be right or wrong in his conception of the details. It almost doesn't matter whether he calls it an information superhighway, an infobahn, or a National Information Infrastructure. What matters is his personal and sincere interest in computers and communications, and the fact that his enthusiasm has raised our popular consciousness of telecommunications. The media cacophony over phenomena like the Internet fosters an open architecture and emphasizes access by all Americans. The clamor, however, has perpetuated a tacit assumption that more bandwidth is an innate, a priori, and (almost) constitutional good. The right to 1,000 channels of TV!

Continental Cable, the local cable company in Cambridge, Massachusetts, now offers Internet access at 500,000 bits per second. With that service, The Wall Street Journal takes sixteen seconds to transmit in its entirety (as structured data mostly, not fax, please!). When fiber reaches the home, by some estimates, we will have access to as much as 100 billion bits per second. Hmmm.

People generally make the false assumption that more bits are better. More is more. In truth, we want fewer bits, not more. Our needs fall along a spectrum. Consider a newspaper: Our requirements are very different on Monday morning from what they were on Sunday afternoon. At 7 a.m. on a workday, you are less likely to be interested in browsing stories. Serendipity just does not play a key role then. In fact, you would most likely be willing to pay The New York Times US$10 for ten pages vs. $1 for 100 pages. If you could, you would opt for a heavy dose of personalized news.

It's simple: Just because bandwidth exists, don't squirt more bits at me. What I really need is intelligence in the network and in my receiver to filter and extract relevant information from a body of information that is orders of magnitude larger than anything I can digest. To achieve this we use a technique known as "interface agents." Imagine a future where your interface agent can read every newspaper and catch every broadcast on the planet, and then, from this, construct a personalized summary. Wouldn't that be more interesting than pumping more and more bits into your home?
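The arithmetic those figures imply is worth seeing side by side; the Journal's size below is simply inferred from the sixteen-second figure:

    # Transmit times implied by the column's numbers.
    wsj_bits = 500_000 * 16     # 16 s at 500 kbps: about 8 Mbits (~1 MB of text)
    fiber_bps = 100e9           # "as much as 100 billion bits per second"

    print(f"WSJ as structured data: {wsj_bits/8/1e6:.0f} MB")
    print(f"over home fiber: {wsj_bits/fiber_bps*1e6:.0f} microseconds")

Bandwidth, in other words, stops being the question long before fiber arrives; filtering becomes the question.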

Guides
Why do people pay 85 cents to find out whether their one daily lottery ticket won? TV Guide has been known to make larger profits than all four networks combined. What do these things tell you? They tell you that the value of information about information can be greater than the value of the information itself. From that and other similar observations (American Airlines makes more from its reservation system than from carrying passengers), I am willing to project an enormous new industry based on a service that helps navigate through massive amounts of data.

When we think of new information delivery, we tend to cramp our thoughts with concepts like "info grazing" and "channel surfing." These concepts just do not scale. With 1,000 channels, if you surf from station to station, dwelling only three seconds per channel, it will take almost an hour to scan them all. A program would be over long before you could decide whether it was the most interesting.

I am fond of asking people how they select a theatrical, box-office movie. Some pretend they read reviews. I hasten to interject my own solution -- which is to ask my sister-in-law -- and people quickly admit that they have an equivalent. What we want to build into these systems is a sister-in-law: an interface agent which is both an expert on movies and an expert on you.

Your Model of Its Model of Your Model of It


The key to agent-based systems is learning. It is not a matter of a questionnaire or a fixed profile. Agents must learn and develop over time, like human friends and assistants. Nor is it only the acquisition of a model of you; it is using that model in context. Timing alone is an example of how human agents distinguish themselves.

But it is all too easy to wave your hand and say "learning." What constitutes learning? The only clue I have found goes back two decades to the work of the English cybernetician Gordon Pask, who taught me to look at the second- and third-order models. In human-to-computer interaction, your model of the computer is less telling than its model of your model of it. By extension, your model of its model of your model of it is even more critical. When this third-order model matches the first (your model of it), we can say that you know each other.
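Pask's second- and third-order models sound baroque, but the bookkeeping is simple. A toy rendering, with dictionaries standing in for models and every attribute invented:

    # Toy Pask-style model hierarchy. Each "model" is just a dict of beliefs.
    your_model_of_it = {"knows my taste": True, "keeps learning": True}

    # Second order: the machine's model of your model of it.
    its_model_of_yours = {"knows my taste": True, "keeps learning": True}

    # Third order: your model of its model of your model of it.
    your_model_of_its = {"knows my taste": True, "keeps learning": True}

    # Pask's criterion: when the third-order model matches the first,
    # you and the machine can be said to know each other.
    print("acquainted:", your_model_of_its == your_model_of_it)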

Swiss Banking of Network-Based Agents


All of us are quite comfortable with the idea that an all-knowing agent might live in our television set, pocket, or automobile. We are rightly less sanguine about the possibility of such agents living in the greater network. All we need is a bunch of tattletale or culpable computer agents. Enough butlers and maids have testified against former employers for us to realize that our most trusted agents, by definition, know the most about us.

I believe there is a whole new business in confiding our profiles to a third party, which will behave like a Swiss bank. I fear this will not be one of my credit card companies, which have sold my name for all sorts of purposes, and have thus shot themselves in the foot. It must be a
credible third party, perhaps a local telephone company, perhaps a long distance company like AT&T, perhaps a new venture altogether. What we should be looking for is an entity which is able and willing to keep our identities confidential while at the same time passing along newsworthy advertising and information.

Such services will only work with a high degree of machine learning. While it is important to postulate such learning, how does this relate to human learning?

Next Issue: Learning vs. Teaching
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.06 June 1994.]


WIRED 2.05 - Bit by Bit on Wall Street: Lucky Strikes Again

NEGROPONTE

Message: 11 Date: 5.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Bit by Bit on Wall Street: Lucky Strikes Again

There is no speed limit on the electronic highway. Change, whether technological, regulatory, or in the area of new services, is happening faster than I can believe - and I think of myself as an extremist when it comes to predicting and initiating change. To me, the current state of affairs is like driving on the autobahn at 160 kph. Invariably, just as I realize the speed I'm going, zzzwoom, a Mercedes passes, then another, and another. Yikes, they must be driving at 200 kph or 220 kph. Such is life in the fast lane of the infobahn, but nowhere more so than on Wall Street.

Bob Lucky, Bellcore's vice president for applied research and a highly acclaimed author and engineer, noted recently (in "Looking Ahead at Telecommunications," Bellcore Exchange, November 1993) that he no longer keeps up to date technically by reading scholarly publications; instead he reads The Wall Street Journal. As usual, Bob is right. The reason for this phenomenon is simple: The future of the computer and communications industries will be driven by applications, not by scientific breakthroughs like the transistor, microprocessor, or optical fiber. The problems now stem not from basic material sciences but from basic human needs.

To focus on the future of the "bit" industry, there is no better place to set one's tripod than on the entrepreneurial, business, and regulatory landscape of the United States, with one leg each in the New York, American, and NASDAQ exchanges.

A Bit of Mickey Mouse


The recently completed battle between QVC and Viacom for Paramount must remind analysts of the duel between Abdullah Bulbul Amir and Ivan Petrofsky Skovar, in which each dies on the other's sword. The winner is the loser. Maiden Paramount has come down with a case of acne since the courtship started, but remains, nonetheless, a beautiful catch because she boasts a wide variety of bits. And it's the bits that count, stupid.

However, it is not just the number of bits but their variety that matters (book bits, sound bits,
movie bits, even hockey-playing bits). The reason is simple: As everything becomes digital, the bits commingle (that's called multimedia), and they leak into the interstices of humanity, previously unreachable by the delivery of physical matter (that's called new markets). If your company makes only one kind of bit, you are not in very good shape for the future; both Sumner Redstone and Barry Diller know that. The Paramount story is about bits, not egos.

All of a sudden, companies see the opportunity not only to resell their archived bits but to mix and match, to augment, and to personalize information and entertainment. The more a bit can be put to use or recycled, the more it is worth. In this regard, a Mickey Mouse bit is probably worth a lot more than a Star Trek bit. My goodness, Mike Eisner's bits even come in lollipop form. More interestingly, his guaranteed audience is refueled at a rate that exceeds 100 million births each year. I am certainly betting on Disney's bits.

Bit Transportation
I cannot think of a worse business to be in than the transport of bits - worse than the airline business with its fare wars. Consider: the business is regulated to such a degree that NYNEX must put telephone booths (which last all of 48 hours) in the darkest corners of Brooklyn, while its unregulated competition will put its booths on Fifth and Park avenues.

That's only the beginning: Now the digital era emerges, and bits need to be priced differently. Surely none of us is going to pay the same for a movie bit (there are about 10 billion of them in a very highly compressed digital movie) as we will for a conversation bit (there are only 100 million of them in a highly data-compressed, two-hour conversation). Consider your mother-in-law's return home from the hospital and her need for an open line, 24 hours per day, just to monitor a half-dozen bits per hour. Try figuring out that business model! Or what about the 12-year-old kid doing his homework, who should have access to WIRED's content for nothing while Wall Street analysts pay a fair price?

It is not difficult to speculate. If management limits a telecommunications company's long-term strategy to carrying bits, it will not be acting in the shareholders' interest. Owning the bits or rights to the bits, or adding significant value to the bits, must be a part of the equation for telecommunications success. Otherwise, there will be no place to add value, and telco operators will be stuck with a service fast becoming a commodity, the price of which will go down further and further.
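Put rough numbers on it. The bit counts below are the column's; the prices are invented placeholders:

    # Price per bit, using the column's bit counts and made-up prices.
    movie_bits = 10e9      # a very highly compressed digital movie
    talk_bits = 1e8        # a compressed two-hour conversation
    movie_price, talk_price = 4.00, 2.00   # assumed dollars, illustrative

    print(f"movie: {movie_price / movie_bits:.1e} dollars per bit")
    print(f"talk:  {talk_price / talk_bits:.1e} dollars per bit")
    # the conversation bit comes out roughly 50x dearer: flat
    # per-bit pricing cannot serve both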

It May Not Be Necessary to Covet Thy Neighbor's Bits


Nintendo and Sega have taught the world a big lesson. Their games represent a business that is larger than the American motion picture industry and growing much faster as well. We are relearning that the money to be made is in the blades, not the razors. That's not a new idea. Wall Street investment scion Warren Buffett knew that when he bought into Gillette.

Computer companies have been positioning themselves as software companies for years. By software they usually meant tools, sometimes end-user systems. A change is afoot. And, no, I'm not going to tell you about the multimedia industry, again. What I am talking about is information about information, and the processes by which we filter the onslaught of bits.

The computer industry's blades may not only be modeled after Bambi or Tetris. Instead, I see a huge market in the agent business, modeled more after the added value of an English butler or the Librarian of Congress. Yes, making and owning the bits is certainly better than simply carrying, storing, or churning them. But there may be another bit business: understanding the bits.

So far, in the theater of Wall Street, the personal information filter business has only played a bit part. I assure you that it will be tomorrow's lead role on the stage of success.

Next Issue: Digital Butlers: Interface Agents
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.05 May 1994.]


WIRED 2.04 - The Fax of Life: Playing a Bit Part

NEGROPONTE

Message: 10 Date: 4.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

The Fax of Life: Playing a Bit Part

People are startled when I criticize the fax machine and accuse it of retarding the ascent of computer-readable information. I truly believe that the fax machine has been a serious blemish on the computer landscape, the ramifications of which we will feel all too soon. But the typical response to such a statement is: "What do you mean? The advent of the fax has been extremely positive."

The fax is a step backward because it does not contain "structured data," but rather an image of text that is no more computer-readable than this page of Wired (unless you are reading it on America Online). Even though the fax is delivered as bits before it is rendered as an image on paper, those bits have no symbolic value. If, 25 years ago, we (that is, some of us in the scientific community) could have been overheard predicting the percentage of text that would be computer-readable by the turn of the millennium, our guesses would have been as high as 90 or 95 percent. But then, boom, around 1980 the previously steady growth in computer-readability took a nose-dive because of the fax.

This magazine page, without my picture, takes about 20 seconds to send by fax. At 9,600 bps, this represents approximately 200,000 bits of information. On the other hand, using electronic mail, only a quarter of those bits are necessary: the ASCII and some control characters. In other words, if you charge me per bit to transmit this page, not only is e-mail better, because it is computer-readable, but it will cost less than a quarter of the fax price. Who's fooling whom, and why did this happen?
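The comparison is easy to work through; the character count for a page of text is an assumed round number:

    # Fax bits vs. e-mail bits for one magazine page.
    fax_bits = 9_600 * 20        # 20 seconds at 9,600 bps = 192,000 bits
    chars_per_page = 6_000       # assumed: a dense page of text
    email_bits = chars_per_page * 8

    print(f"fax:    {fax_bits:,} bits, an unreadable picture of text")
    print(f"e-mail: {email_bits:,} bits, structured and searchable")
    print(f"ratio:  {fax_bits / email_bits:.0f} to 1")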

A Japanese Legacy
To understand the fax, one must understand Japan, Kanji, and iconic "alphabets" (full Kanji, for example, has over 60,000 symbols). As recently as ten years ago, Japanese business was not conducted via letter but by voice, usually face to face. Few businessmen had secretaries, and documents were written, often painstakingly, by hand. The equivalent of a typewriter looked more like a typesetting machine,
with an electromechanical arm positioned over a dense template of choices to produce a single Kanji symbol. It goes without saying that a string of 8 bits, like ASCII, was insufficient to represent the full set of choices. The pictographic nature of Kanji made the fax a natural. Since little Japanese was then (and is now) in computer-readable form, there was (and is) no comparable loss.

In a very real sense, fax standardization, led by Japanese companies, gave great short-term favor to their written language but resulted in great long-term harm to ours. I have heard estimates that as much as 70 percent of telephone traffic across the Pacific today is fax, not voice. Like the answering machine, the fax is a blessing to the phone companies.

E-mail Is the Right Way


Use of e-mail is also exploding. In some respects, the invention of e-mail is much more recent than that of the fax, which can be traced to the early 1900s. However, general use of e-mail predates general use of the fax. E-mail started during the middle and late 1960s. The slow and steady growth of e-mail continued through the 1970s and then was dramatically overtaken by fax communication. But this is now changing. Today, there are about 40 million "long-distance" e-mail users, and that number is said to be growing by more than 10 percent per month. This does not include the countless closed systems through which a small set of users send messages among themselves. By the turn of the century almost everyone will be using e-mail, not fax.

What about photographs, graphics, and richer typography? These will come with page-description languages, which exist widely today but have no commonly accepted standard. For this reason, e-mail today is typographically parsimonious, only one step beyond the upper-case-only vernacular of telegrams. E-mail is data that can be filtered, sorted, retrieved, and edited. Its form makes it meaningful to computers, as well as to people. Unlike the fax, e-mail represents the alphanumeric structure of a message. Such structure has wide implications.

Football as a Model
Television is "moving" fax. The Economist estimates that less than 1 percent of the world's information is in digital form. This estimate certainly appears accurate when considering photographs, film, and video, all of which require so many bits. However, the statistic does not reflect the fact that when many of those media are digital, they are neither more nor less computer-understandable than they are today.

Consider your audio CD (audio fax, if you will), which is indeed digital, but not structured, data. So far, the closest example to audio ASCII is musical notation as we know it in scores.

A football game, recorded and transmitted via digital or analog video, has no structure. Each frame functions like a fax. The alternative is to capture the game as a model, with each player represented as a complex mathematical marionette, whose kinematics can be derived by a sensor and transmitted to your receiver (4-dimensional ASCII). At the receiver, not in the camera, the representation is "flattened" onto the screen or displayed holographically. Not only can the game be seen from any perspective, but the computer can reconstruct plays as diagrams, compare the tactics of one play with a previous one, show it from the perspective of the quarterback, or make some canny predictions.

My point, therefore, is more general than a flame at fax machines. It is a call for greater attention to the structure and content of bit streams, versus the wholesale digitizing of data. Being digital is not enough. When American Express began storing my credit card slips as images, my heart sank. They seemingly threw out the content of the transaction and saved only a picture of my payment. Similarly, I just don't believe that insurance adjustment forms need to be stored as pictures. We need the computer vendors to stop selling imaging systems to information providers. These are no more inspired or helpful than microfilm. It is time to buckle down and attack the hard problem of page, document, picture, and video description languages that allow for all our data streams to be in symbolic, not facsimile, form. Otherwise, we are all being sold a "bit" of goods.
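How many bits would the marionette model need? A rough budget; every rate and precision below is assumed for illustration, and the video figure is simply the 10-billion-bit compressed-movie number from WIRED 2.05 spread over two hours:

    # Model-based football vs. pixel-based video: a rough bit budget.
    players = 22 + 1            # 22 players plus the ball
    floats_per_state = 6        # say, position and orientation
    bits_per_float = 32
    updates_per_second = 30

    model_bps = players * floats_per_state * bits_per_float * updates_per_second
    video_bps = 10e9 / (2 * 3600)   # 10 billion bits over a two-hour broadcast

    print(f"marionette model: {model_bps/1e3:.0f} kbps")
    print(f"compressed video: {video_bps/1e6:.1f} Mbps, with no structure at all")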
Next Issue: Bit By Bit on Wall Street

[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.04 April 1994.]


WIRED 2.03 - Talking With Computers

NEGROPONTE

Message: 9 Date: 3.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


"Okay, where did you hide it?" "Hide what?" "You know!" "Well, where do you think?" "Oh."

Talking With Computers

The scene comes from an MIT proposal on human-computer interaction submitted to ARPA twenty years ago by Chris Herot (now at Lotus), Joe Markowitz (now at the CIA), and me. It made two important points: Speech is interactive, and meaning - between people who know each other well - can be expressed in a shorthand language that would probably be meaningless to others.

It may be difficult for the reader to believe the degree to which speech I/O has been studied separately in the past. Like Benedictine monks, each research team developed and guarded a special voice input or output technique, rarely fussing over the conversational brew. Understanding speech as a component of a conversation is very different from understanding it as a monologue.

Ah Ha!
I have told the following story a million times (admittedly, a figure of speech!). In 1978, our lab at MIT was building a management information system for generals, CEOs, and 6-year-old children - namely, an MIS that could be learned in less than ten seconds. As part of this project we received NEC's top-of-the-line, speaker-dependent, connected speech-recognition system. Like all such systems, then and now, it was subject to error when the user showed even the lowest level of stress in his or her voice. Mind you, this would not necessarily
be audible to you or me. ARPA, the sponsor of that research, made periodic "site visits" to review our progress. On these occasions, the graduate students prepared what we thought were bug-free demonstrations. We all wanted the system to work absolutely perfectly during these reviews. The very nature of our earnestness produced enough stress to cause the system to crash and burn in front of the ARPA brass. Like a self-fulfilling prophecy, the system almost never worked for important demos; our graduates were just too nervous, and their voices reflected their condition.

A few years later, one student had an idea: Find the pauses in the user's speech and program the machine to generate the utterance "ah ha" at judicious times. Thus, as one spoke to the machine, it would periodically say: ah ha, ahh ha, or ahhh ha. This had such a comforting effect (it seemed that the machine was encouraging the user to converse) that the user relaxed a bit more, and the performance of the system skyrocketed.

Our idea was criticized as sophisticated charlatanry. Rubbish. It was not a gimmick at all, but an enlightened fix. It revealed two important points: For one, not all utterances need have lexical meaning to be valuable in communications; for another, some utterances are purely protocols, like network handshaking. Think of yourself on the telephone. If you do not say "ah ha" to the caller at appropriate intervals, the person will become nervous and, ultimately, inquire: "Are you there?" You see, the "ah ha" is not saying "yes," "no," or "maybe"; it is basically transmitting one bit of information to say, "I'm still here and listening."

The reason for revisiting this long story is that some of the most sophisticated people within the speech recognition community failed to understand what I have just illustrated. In fact, in many labs today, speech recognition and production are still studied in different departments or labs! I frequently ask, "why?" One conclusion is that these people are not interested in communication, but transcription. That is to say, people in speech recognition wish to make something like a "listening" typewriter which can take dictation and produce a document. Good luck! People are not good at that. Have you ever read a transcription of your own speech?

Instead of transcription, let's look at speech as an interactive medium, as part of a conversation. This perspective is well presented in the forthcoming book by Chris Schmandt, Voice Communication with Computers: Conversational Systems (Van Nostrand Reinhold, 1994).
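The student's fix is almost embarrassingly easy to write down. A sketch; the energy envelope, the threshold, and the timing are all invented:

    # Toy backchannel generator: watch the speech energy envelope and
    # murmur agreement during pauses. All numbers are invented.
    import random

    def backchannel(energy_frames, silence_thresh=0.1, pause_frames=8):
        quiet = 0
        for t, e in enumerate(energy_frames):
            quiet = quiet + 1 if e < silence_thresh else 0
            if quiet == pause_frames:    # a judicious pause found
                yield t, random.choice(["ah ha", "ahh ha", "mm hmm"])

    # one second of speech, then a pause (10 frames per second, say)
    frames = [0.8] * 10 + [0.02] * 12
    for t, utterance in backchannel(frames):
        print(f"frame {t}: {utterance}")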

Table Talk
Talking with computers goes beyond speech alone. Imagine the following situation. You are sitting around a table where everyone but you is speaking French, but you do not speak French. One person turns to you and says: "Voulez-vous encore du vin?" You understand
perfectly. Subsequently, that same person changes the conversation to, say, politics in France. You will understand nothing unless you are fluent in French (and even then it is not certain). You may think that "Would you like some more wine?" is baby-talk, whereas politics requires sophisticated language skills. So, obviously, the first case is simple. Yes, that is right, but that is not the important difference between the two conversations.

When the person asked you if you wanted more wine, he or she probably had an arm stretched toward the wine bottle and eyes pointed at your empty wine glass. Namely, the signals you were decoding were parallel and redundant, not just acoustic. Furthermore, all the subjects and objects were in the same space and time. This is what made it possible for you to understand.

The point is that redundancy is good. The use of parallel channels (gesture, gaze, and speech) should be the essence of human-computer communications. In a foreign land, one uses every means possible to transmit intentions and read all the signals to determine even minimal levels of understanding. Think of a computer as being in such a foreign land, ours, and being expected to do everything through the single channel of hearing. Humans naturally gravitate to concurrent means of expression. Those of you who know a second language, but do not know it very well, will avoid, if at all possible, using the telephone. If you arrive at an Italian hotel and find no soap in the room, you will go down to the concierge and use your best Berlitz to ask for soap. You may even make a few bathing gestures. That says a lot.

When I talk with my computers in the future, I will expect the same plural interface. If I do too much talking at one of my computers, I will not be surprised if it asks me one day, "Can we have a conversation about this?"
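Parallel, redundant channels are easy to caricature in code. A toy fusion step; the hypotheses and likelihoods are all made up, and the independent-channel assumption is the crudest possible one:

    # Toy multimodal fusion: weak evidence from speech, gaze, and gesture.
    speech  = {"more wine?": 0.40, "more rice?": 0.35, "politics": 0.25}
    gaze    = {"more wine?": 0.70, "more rice?": 0.20, "politics": 0.10}
    gesture = {"more wine?": 0.80, "more rice?": 0.15, "politics": 0.05}

    def fuse(*channels):
        scores = {k: 1.0 for k in channels[0]}
        for channel in channels:
            for k in scores:
                scores[k] *= channel[k]   # treat channels as independent
        return max(scores, key=scores.get)

    print(fuse(speech, gaze, gesture))    # the ambiguity melts: "more wine?"

No single channel is certain; together they leave little doubt.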
Next Issue: The Fax of Life

[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.03 March 1994.]


WIRED 2.02 - Talking to Computers: Time for a New Perspective

NEGROPONTE

Message: 8 Date: 2.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Talking to Computers: Time for a New Perspective

In contrast to the gain in graphical richness of computers, speech recognition has progressed very little over the past fifteen years. And yet, fifteen years from now, the bulk of our interaction with computers will be through the spoken word. It is time to move on this interface backwater and correct the fact that computers are hearing impaired. In my opinion, the primary reason for so few advances is perspective, not technology. People have been working on the wrong problems and hold misguided views about the voice channel.

When I see speech recognition demonstrations or advertisements with people holding microphones to their mouths, I wonder: Have they really overlooked the fact that one of the major values of speech is that it leaves your hands free? When I see people with their faces poked into the screen - talking - I wonder: Have they forgotten that the ability to function from a distance is a reason to use voice? In short, most people developing speech systems need a lesson in communications interfaces.

Speech Goes Around Corners


Using computers today is so overt that the activity demands absolute and full attention. Usually, you must be seated. Then you must attend, more or less exclusively, to both the process and content of the interaction. There is almost no way to use a computer in passing or to have it be one of several conversations. This is oversight number one.

Computing at and beyond arm's length is very important. Imagine if talking to a person required that his or her nose always be in your face. We commonly talk to people at a distance, we momentarily turn away and do something else, and it is not uncommon to be out of sight while still talking. That is what I want to be able to do with a computer: have it be in "earshot." But this requires an aspect of speech input that has been almost totally ignored: sound separation and capture. It is not trivial to segregate speech from the sounds of the air conditioner or an airplane overhead. But such separation is crucial, because speech has little value if the user is limited to talking from one noise-free place.

Aural Text
Oversight number two: Speech is more than words. Anyone who has a child or a pet knows that what is said can be as important as how it is said. In fact, dogs respond to tone of voice more than to any innate ability to do complex lexical analysis. I frequently ask people how many words they think their dogs know, and I have received answers as high as 500 to 1,000. I suspect the number is closer to 20 or 30.

Spoken words carry a vast amount of information beyond the words themselves, which is something that my friends in speech recognition seem to ignore. While talking, one can convey passion, sarcasm, exasperation, equivocation, subservience, exhaustion, and so on, with the exact same words. In speech recognition, these subcarriers of information are ignored or, worse, treated as bugs rather than features. They are, however, the very features that make speaking a richer medium than typing.

The Three Dimensions of Speech


Speech recognition can be viewed as a problem defined by three axes: vocabulary size, degree of speaker independence, and the extent to which words can be slurred together (their connectedness). Think of this as a cube, whose lower left-hand near corner is a small vocabulary of totally speaker-dependent words that must be uttered with distinct pauses between each. This is the simplest corner of the problem space. As you move out along any axis - making the vocabulary larger, making the system work for any speaker, or allowing words to be run together - speech recognition gets harder and harder for the computer. In this regard, the upper right-hand far corner of this cube represents the most difficult place to be. Namely, this is where we expect the computer to recognize any word, spoken by anybody, "inneny" degree of connectedness.

A common assumption has been that we must be far out on all three of these axes for speech recognition to be at all useful. I do not agree.

One might ask, when it comes to vocabulary size, how big is big enough: 500, 5,000, or 50,000 words? The question is wrong. It should be: How many recognizable words need to be in the computer's memory at any one time? This question suggests subsetting vocabularies, such that chunks can be folded into the machine as needed (see the sketch at the end of this column). When I ask my computer to place a phone call, my Rolodex is loaded. When I am planning a trip, the names of places are there instead. If one views vocabulary size as the set of words needed at any one time, then the computer needs to select from a far less daunting number of words - closer to 500 than to the superset of 50,000.

Looking at speaker independence: Is this really so important? I believe it is not. In fact, I think I would be more comfortable if my computer were trained to understand my spoken commands
and maybe only mine. The presumed need for speaker independence is derived in large part from earlier days, when the phone company wanted anybody to be able to talk to a remote database. The central computer needed to be able to understand anybody - a kind of "universal service." Today, we can do the recognition in the handset, so to speak. What if I want to talk with an airline's computer from a telephone booth? I call my computer or take it out of my pocket and let it do the translation from voice to ASCII. Once again, we can do a great deal at the "easier" end of this axis.

Finally, connectedness. Surely we do not want to talk to a computer like a tourist addressing a foreign child, mouthing each word as if in a locution class. Agreed. And this axis is the most challenging in my mind. But even here, there is a way out in the short term: Look at vocabulary as multiword utterances, not as just single words. These utterances can be short, slurred phrases of all kinds, which endow the machine with sufficient connected speech recognition to be very useful. In fact, handling runtogetherspeech in this fashion may well be part of the personalization and training of my computer.

My purpose is not to argue any one of these three points to death, but to show more generally that one can work much closer to the easiest corner of speech space than has been assumed, and that the hard and important problems are elsewhere. Said in another way: It is time to look at talking from a different perspective.
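The subsetting idea, as promised, in sketch form; the contexts and word lists are invented examples:

    # Toy context-dependent vocabulary: load only the words the task
    # needs right now, not the 50,000-word superset.
    vocabularies = {
        "phone":  ["call", "hang up", "redial", "Alice", "Bob", "Carol"],
        "travel": ["book", "cancel", "Boston", "Bodrum", "Tokyo"],
    }

    class Recognizer:
        def __init__(self):
            self.active = []
        def set_context(self, context):
            self.active = vocabularies[context]   # fold in the small chunk
        def recognize(self, word):
            return word if word in self.active else None

    r = Recognizer()
    r.set_context("phone")
    print(r.recognize("Alice"))    # found in the active six-word subset
    print(r.recognize("Bodrum"))   # None: not loaded until a trip is planned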
Next Issue: Talking WITH Computers

[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.02 February 1994.]


WIRED 2.01 - Aliasing: The Blind Spot of the Computer Industry

NEGROPONTE

Message: 7 Date: 1.1.94 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


Aliasing: The Blind Spot of the Computer Industry

From Mistake to Mascot

Have you ever wondered why your computer screen has jagged lines? Why do pyramids look like ziggurats? Why do uppercase E, L, and T look so good, yet S, W, and O look like badly made Christmas ornaments? Why do curved lines look like they've been drawn by someone with palsy? I've met people who think these staircase artifacts are intrinsic to computer displays - more or less a given with which they must live. After all, we've watched enough Westerns and seen stagecoach wheels go backwards, and we don't flame the movie studios. Well, this month's column is my flame to almost every computer manufacturer and software developer on the planet. People are tired of your jaggies. It's time to correct your offensive fonts and graphics. And, as you know, it is not hard to do.

Here's an irony. Remember those funny fonts taken from magnetically sensitive characters on checks? One font was even given a name: MICR (my guess is that this is an acronym for Magnetic Ink Character Recognition). During the 1960s and 1970s, graphic designers frequently used MICR to lend a look and feel of the electronic age. We are doing this all over again in the 1980s and 1990s with aliased fonts (so far nameless), frequently used in graphic design to signal "computer." Before this mascot does get a name, let's correct it, because today there is no need for lines and characters to be anything less than print quality and perfectly smooth.

I won't go into the added irritation we encounter in animation. As an image moves, the jagged little steps come and go, increase and decrease in number, and move in all sorts of counterintuitive directions. The passenger beside me on the plane, as I wrote this, was playing a golf game on his laptop and did not seem to be fazed by the fact that the golf club went from being perfectly straight to being a staircase with moving steps. When I pointed this out, he suddenly found the game too annoying to play (sorry about that). He reacted with disbelief when he learned how unnecessary this condition is.

Why Can't New Dogs Learn Old Tricks?


The techniques for getting rid of the jaggies, called "anti-aliasing," were developed 25 years ago and can be credited to at least three research centers: Xerox PARC, the University of Utah, and MIT. Researchers (I include myself) observed, for example, that video cameras produce images of lines and letters that do not have staircases. Without going into technical detail, suffice it to say that the graytone of those images allows for smoothness. By adding levels of gray or tonal depth (i.e., bits in the z-axis), one gains perceived spatial resolution (i.e., in the plane of x and y). In other words, by putting your bits in z, you get better x-y resolution than if you put them directly into x and y. Time and time again, it has been proved that the human eye is better served by putting more memory (more levels of gray) into the z-axis than by adding more pixels per inch.

Part of the confusion and historical stubbornness in this area stems from a lack of awareness about the difference between half-tones in print media and the continuous tone of video. The pixels on your screen are not, repeat not, like half-tone dots (the use of the word "dot" here is misleading). The dots in newspaper and magazine pictures are, in fact, not dots at all. They are oddly shaped, amorphous blobs of ink that occupy a printed "cell" in accordance with the level of gray in the image being screened (what we call half-toning). The dots on your display screen are not amorphous, but they can have graytone, which is the whole point.

So, why aren't all computer displays anti-aliased? Here is the excuse: When a character or line is de-jaggied, it must be computed in accordance with the background. Just think of a black diagonal line passing over both a white and a gray patch. The levels of gray that make the line look smooth go from black to white in the first case, but from black to gray in the second. One must look at what is there before proceeding willy-nilly, writing lines and characters. And the excuse goes on: If, suddenly, any part of the background changes, information must be at hand to recompute and re-anti-alias.

Ten years ago, one could stomach the argument that this was sufficiently difficult and time consuming to conclude that computer power was best spent elsewhere. Also, graytone, let alone color, was not then common in displays (full color is just three sets of graytone, one each for red, green, and blue). Tomorrow it will be almost unthinkable to work in anything but full color, and today even the least expensive computers have graytone.
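The background-dependent computation described above is, at bottom, one line. A sketch, with invented coverage values:

    # Anti-aliasing as coverage blending: a pixel crossed by a black line
    # takes a gray level set by how much of the pixel the line covers,
    # computed against whatever background is already there.
    def shade(coverage, background):
        # coverage: fraction of the pixel under the line, 0..1
        # background: existing gray, 0.0 black to 1.0 white
        return (1.0 - coverage) * background   # the line itself is black

    # the column's example: the same 30%-covered pixel over white vs. gray
    print(shade(0.3, 1.0))   # 0.70, a smooth step toward white
    print(shade(0.3, 0.5))   # 0.35, recomputed for the gray patch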

Jaggies Should Be An OSHA Violation


What puzzles me the most is that we seem to have educated an entire generation of computer scientists who don't fully understand this simple phenomenon, and we seem to have trained the public to take it for granted. Perhaps it's time to make aliased graphics a violation of the Occupational Safety and Health Administration's minimum standards for display quality. Or perhaps the Environmental Protection Agency can declare this condition to be visual pollution. The point is that it must stop.

I would have expected Japan to be a greater force in this area, because Kanji benefits even more than the Latin alphabet from the resolution added by graytone. I would have expected Europe to be more active, since there is much EC legislation concerning computer screen characteristics. I would have expected the United States to implement anti-aliasing, if only because its theoretical and practical roots are in America. But, alas, the ambivalence is worldwide.

As we rush into a world of sophisticated games, electronic books, and multimedia everything, we will invariably see more and more jaggies, and more and more people will assume they are intrinsic. They are not. If you don't believe me, ask a computer science friend. There really is no excuse any more. So wake up: Apple, IBM, DEC, HP, Microsoft, and all you other companies. We're tired of the jaggies.

Next Issue: Talking to Computers
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.01 January 1994.]


WIRED 1.06 - Virtual Reality: Oxymoron or Pleonasm?

NEGROPONTE

Message: 6 Date: 11.1.93 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Virtual Reality: Oxymoron or Pleonasm?

I never knew the meaning of "pleonasm" until I recently listened to a lecture by Mike Hammer (not the detective, but the world's leading "re-engineer"). In his typically animated fashion, Hammer presented "corporate change" as an oxymoron on its way to becoming a pleonasm. Basically, a pleonasm is a redundant expression like "in one's own mind." It is the opposite of an oxymoron, which is an apparent contradiction like "artificial intelligence" or "airplane food." If prizes were awarded for the best oxymorons, "virtual reality" would certainly be a winner.

Freshman physics teaches us about real versus virtual images. Classicists get a more complex dose of the same in their reading of Plato. But virtual reality - or VR - is becoming a pleonasm. If the words "virtual reality" are seen not as noun and adjective but as "equal halves," the logic of calling VR a pleonasm is more palatable. Basically, VR makes the artificial as realistic as the real. In flight simulation, its most sophisticated and longest-standing application, VR is more realistic than the real. Pilots are able to take the controls of fully loaded passenger planes for their first flight because they have learned more in the simulator than they could in a real plane. In the simulator, a pilot can be subjected to rare situations that, in the real world, would require more than a near miss.

I have often thought that one of the most socially responsible applications of VR would be its required use in driving schools. Virtual reality can place drivers in perilous predicaments - on a slippery road, a child darts out from between two cars - that they may encounter in their cars. All of us hope we are never faced with such situations, and none of us knows how we might react. VR allows one to experience a situation "with one's own eyes" (another pleonasm). As the French journalist Rene Doutard wrote, "Courage is having done it before."

VR Then and Now


Neophytes have a mistaken sense that VR is very new because the press just learned about it. It is not. Almost 25 years ago, Ivan Sutherland developed, with support from ARPA, the first surprisingly advanced VR system. This may not be astonishing to old-timers, because Ivan seems to have had half the good ideas in computer science. However, Ivan's idea is now very affordable. One company, whose name I am obliged to omit, will soon introduce a VR display
system with a parts cost of less than US$25. The dress code for VR is a head-tracking helmet with goggle displays. The principle is simple: Put data where the person is looking and nowhere else. Once you don such a display, the general locale of your gaze is a given, and elementary optics can move an image from the tip of your nose to infinity. For a computer-graphics jock, the measures of reality are the number of polygons and/or edges a given image has, and the ability to apply textures to those images (considered cheating by some). Should you ask yourself, "What is the optimum number of edges and display resolution needed for photo-realistic imaging?" the answer is probably near you as you read this. Look out a window and imagine that window is a display.

The argument will be made that head-mounted displays are not acceptable because people feel silly wearing them. The same was once said about stereo headphones. If Sony's Akio Morita had not insisted on marketing the damn things, we might not have the Walkman today. I expect that within the next five years more than one in ten people will wear head-mounted computer displays while traveling in buses, trains, and planes. That number could include pilots - who could be landing planes in low visibility wearing goggles that subtract the real fog.

By the way, don't believe for a moment that all of our perceptions are derived from what we see. One of the most frequently cited studies conducted at the Media Lab was authored by Professor Russ Neuman, who proved that people saw a better picture when sound quality was improved. This observation extends to all of our senses as they work cooperatively. Some Department of Defense prototypes have shown that minor and random vibrations of a tank simulator platform induce an uncanny sense of extra visual realism.

The Couch Commando


The real issue and challenge in VR today is not the display, but how to reconcile a person's expectations of reality with what current systems deliver in terms of response time. In fact, all commercial systems, including those that will soon be brought to you by the major video game manufacturers, have a terrible lag. As you move your head, the image before you changes rapidly, but not rapidly enough. Even sophisticated flight simulators are lacking in this regard. When you look out that window, you take for granted that the mullions won't alias and jerk as you move your head from left to right. We grow up in a world that fosters immediacy in action and reaction. In fact, young children find it almost impossible to steer a motor boat because its response time is just too long.

I played with VR systems fifteen or twenty years ago. With head-mounted glasses that were either piezo-ceramic shutters or polarized lenses, we could display images in stereo, squirting the proper view into each eye and thereby giving a sense of depth through binocular parallax.
This is commonplace today. But what I remember so vividly is that everyone - not most people, but literally everyone - would, after putting these glasses on for the first time, immediately move their heads from side to side, looking for the images before them to reflect their expectations of realistic motion parallax. Usually the system did not perform. That human response, the "neck jerk" reaction, says it all. In VR, the frequency response of the system will be almost all that counts.

While I am not aware of any such studies that would support the claim, I suspect that rapid response can be traded for resolution. If you look to the right or the left, you will be very dissatisfied if the landscape moves along jerkily, with spatial and temporal aliasing, because aliased VR is the oxymoron, while VR itself will be the pleonasm, whether we like the ring of the words or not.

Next Issue: Aliasing: The Technical Blindspot of the Computer Industry
[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.06 November 1993.]


WIRED 1.05 - Repurposing the Material Girl

NEGROPONTE

Message: 5 Date: 10.1.93 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Repurposing the Material Girl

The fact that, in one year, a 34-year-old former Michigan cheerleader generated sales in excess of $1.2 billion did not go unnoticed by Time Warner, which signed Madonna to a $60 million "multimedia" contract last year. At the time, I was startled to see "multimedia" used to describe a collection of unrelated traditional print, record, and film productions. Since then, I see the word almost every day in the Wall Street Journal, often used as an adjective to mean anything from interactive to digital to broadband. It would seem that if you are an information and entertainment provider who does not plan to be in the multimedia business, you will soon be out of business.

What is this all about? It is about both new content and looking at old content in different ways. It's about one intrinsically interactive medium, made possible by a digital lingua franca: bits. And it's about the decreasing costs, increasing power, and exploding presence of computing in our daily lives: 47 percent of all PCs sold in 1992 went to the home market. This technological push is augmented by an aggressive pull from media companies, which are selling and reselling as many bits as possible, including Madonna's (which sell so well). This not only means reuse of data, music, and film libraries but also the expanded use of text, audio, and video for as many purposes as possible, in multiple packages and through diverse channels.

The Golden Fleece to the Golden Goose


In 1975, Richard Bolt, Andy Lippman, and I submitted a proposal called "Multimedia Computing" to the Defense Advanced Research Projects Agency (previously ARPA, then DARPA, now once again ARPA). It was accepted on the condition that we change the title to avoid the possibility of receiving the Golden Fleece Award from Senator Proxmire, an annual prize given to the most gratuitously funded government project. I had been nominated on several occasions but never tarred and feathered by the dubious honor. (In December 1979, the Office of Education won the Fleece for spending $219,592 to develop a "curriculum package" to teach college students how to watch television.)

Anyhow, it is interesting to observe that during the 1970s, "multimedia" meant "nightclubs." It carried the connotation of rock music plus light show. In 1978, when we showed a full-color, illustrated page of text on a computer screen, people gasped in astonishment when an illustration turned into a sound-synch movie at the touch of a finger. Some of today's best multimedia titles, like Robert Winter's Mozart, are high production value renditions of sloppy but seminal experiments from the 1970s.

What today's titles share with the past is the simple idea that three discrete streams of data - audio, video, and text - explicitly meet on the screen with an order imposed by astute synchronization. The current challenge in designing multimedia product is very much the organization of time, or what might be called "page layout" in the space of X, Y, and T. But multimedia can mean more.

The Message is the Medium


Modern multimedia, at least our thinking about it, must include the automatic transcoding from one medium into another, or the translation of a single representation into many media. Namely, modern multimedia should redefine our notions of a medium. WIRED's patron saint, Marshall McLuhan, was right about the medium being the message in the 1960s and 1970s. But that is not the case today. In a digital world the message is the message, and the message, in fact, may be the medium.

Multimedia needs to include fluid movement from one medium to another, saying the same thing in different ways, calling upon one human sense or another, depending on what you are doing. Books that read themselves when you are dozing off, or movies that explain themselves with text, are good examples.

The salient still, a recent breakthrough at the Media Lab, is an even better illustration of transcoding in multimedia. The original problem addressed by Walter Bender, a founding member of the MIT Media Lab, was: How could video be printed in such a way that the resolution of the still image would be an order of magnitude greater than any one frame? A single frame of video has very low resolution in comparison to photos. The answer, clearly, was to pull resolution out of time and look at many frames both forward and backward in time. Today, Bender makes high-quality video prints from crummy 8mm video. These stills have in excess of 5,000 lines of resolution. This means that any frame from the billions of hours of 8mm home movies stored in the shoeboxes of American homes can be turned into a Christmas card or printed for a photo album with as much or more resolution as a normal 35mm snapshot.

However, something much more than resolution results. The print captures an image that never existed. Instead, it represents a static window of many seconds of time. During that time the camera may zoom and pan, and objects in the scene may move. The image is nonetheless crisp and perfectly resolved. Its contents reflect the filmmaker's intentions by putting more resolution in places where the camera zoomed or by widening the scene if it panned. Quickly moving elements, like a person walking across a stage, drop out in favor of the temporarily stable ones.
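Here is a toy sketch, in Python, of one ingredient of the idea just described - not Bender's algorithm, just the temporal part: a per-pixel median over co-registered frames, which is one way fleeting objects can drop out in favor of the stable scene. The real system also registers frames across pans and zooms to gain resolution; the frame data below is invented.

```python
# A toy sketch of the "drop out" effect in a salient still: take the
# per-pixel median across several aligned frames, so a transient object
# vanishes in favor of the temporally stable backdrop.
from statistics import median

def salient_still(frames):
    """frames: a list of equally sized 2D lists of gray levels."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

# Three 1x5 "frames": a static backdrop of 100s, with a walker (value 255)
# passing through a different column in each frame.
frames = [
    [[100, 255, 100, 100, 100]],
    [[100, 100, 255, 100, 100]],
    [[100, 100, 100, 255, 100]],
]
print(salient_still(frames))  # [[100, 100, 100, 100, 100]] - the walker is gone
```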
What occurs in this example of "multimedia" is important: Movement from one medium to the next requires transcoding one dimension (time) into another dimension (space). We have simple examples in our daily lives, where, for instance, a speech (the acoustic domain) is transcribed with punctuation (the text domain) to render a small semblance of intonation. In the script for a play, much more is added in parentheses to characterize action.

True multimedia, not all of which has to be explicit sound and light on the screen (some of it can be in your head), will include the automation of transcoding from one medium to the next, because people will not be satisfied with the assumption that they can only be seated in front of an array of playback machines lashed together by a gaggle of wires. We are just as likely to want teleconferencing output, for example, on a Personal Digital Assistant as we are on a full-blown "virtual reality" system worn over our heads. In short, ubiquity is more important to multimedia than is explicit immersion.

Next Issue: Virtual Reality - Oxymoron or Pleonasm?
[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.05 October 1993.]


WIRED 1.04 - Set-Top Box As Electronic Toll Booth: Why We Need Open-Architecture TV

NEGROPONTE

Message: 4 Date: 8.1.93 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Set-Top Box As Electronic Toll Booth: Why We Need Open-Architecture TV

Is Bill Gates using John Sculley's speeches to guide his alliances? The makers of computer hardware and software evince uncanny synchronism in their lusting toward the cable industry. This is not surprising when we consider that ESPN has more than 60 million subscribers. Microsoft, Silicon Graphics, Intel, IBM, Apple, and HP have all entered major agreements with the cable industry.

The object of this ferment is the set-top box, currently little more than a plug adapter but destined to be much more. At the rate things are going, we may soon have as many types of set-top boxes as we now have infrared remotes. Such a smorgasbord of incompatible systems is a horrible thought.

The passion for this box stems from its potential function as, among other things, a gateway through which the "provider" of that box and its interface can become a gatekeeper of sorts, charging onerous fees for information as it passes through the gate and into your home. While this sounds like a dandy business, it is unclear if it's in the public's best interest. Worse, a set-top box itself is short-sighted and the wrong cynosure. We should broaden our vision and set our sights instead on open-architecture television (OATV).

Smart Boxes Are Not Enough


What's going wrong? It's simple. Even the most conservative broadcast engineers agree that the difference between a television and a computer will be limited, eventually, to peripherals and to the room in which it is found. Nonetheless, this vision is being compromised by monopolistic tendencies and by an incremental improvement of a box to control 1,000 programs, 999 of which we don't want to watch. So far, OATV - where computers and television become one - has been seriously out-boxed in the first round.

The word "box" carries all the wrong connotations, but here's the theory. Our insatiable appetites for bandwidth (see my column in WIRED 1.3) put cable television in the lead position as a provider of information and entertainment services on demand. Cable services today include set-top boxes because only a fraction of TV receivers are cable-ready.
Given the acceptance of this box, the idea is to aggrandize it with additional functions - give it long pants, so to speak. But this cannot be the right approach.

The Under-Set Pizza Box


When Sun Microsystems introduced the SparcStation 1 in 1989, its chassis had the form of a pizza box, which launched a trend for under-monitor electronics containing all the elements of an "open system" (credited to Sun as well). Map this simple change of thinking into a television receiver, and imagine an under-set configuration with more computer-like modularity and an expandable chassis.

In such a world, the future of television is more clearly seen as data broadcast. At first, most of the data will be video, but eventually there will be other services, including data about the data (as I suggested in the last issue, the medium is the model). An open systems approach is likely to foster the most creative energies for new services and be a vehicle for the most rapid change and evolution.

I have argued that the number of scan lines, frame rate, aspect ratio, pixel shape, and "interlace" versus "progressive" scan rates were non-issues and should be variables, not religions or laws. During a recent congressional hearing, Representative Ed Markey patiently listened for over an hour to arguments about why interlace and square pixels would help job growth in America. Give me (and him) a break. Of course interlace has little place in the future, and perhaps someone will come up with an interesting application for non-square pixels (hard to imagine), but to suggest that either of these be legislated is simply silly. Let the under-set computing engine worry about that, not Congress or the FCC.
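A minimal sketch of what "variables, not laws" could look like in practice - a program header that announces its own raster, which the under-set engine maps onto whatever display it finds. All field names and numbers here are my invention, not any broadcast standard:

```python
# A sketch of scan parameters as variables rather than legislation: each
# program carries its own raster description, and the receiver adapts.
program_header = {
    "lines": 480, "frame_rate": 29.97, "aspect": (4, 3),
    "scan": "interlace",           # or "progressive" - the sender's choice
}

def configure(display, header):
    """Map whatever raster arrives onto whatever display we happen to own."""
    scale = display["lines"] / header["lines"]
    return {"scale": round(scale, 2),
            "deinterlace": header["scan"] == "interlace"}

my_display = {"lines": 1080}
print(configure(my_display, program_header))
# {'scale': 2.25, 'deinterlace': True} - no act of Congress required
```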

OATV Can Be Topless


It is time to make the leap to an OATV world, not limit our vision to an expanded and proprietary cable set-top box. In an OATV world, the monitor itself can be an option. The bits may be video, or they may not be. They may be audio or data destined for online services or in-home printing of a personalized newspaper. It's as if one must yell at the concatenated players of the FCC's HDTV bake-off - the so-called Grande Alliance - "It is not just about TV."

The recent alliance of Microsoft, Intel, and General Instrument (GI) to develop a set-top box and TV operating system is very revealing. Add GI's predominance in the cable industry to the enormous power and computer savvy of Intel and Microsoft, and the result is a formidable consortium, if not a cartel. Why would this truly "grande" alliance focus on the set-top box? Surely the members must see the bigger picture. (Perhaps the problem is that Microsoft thinks MS-DOS is an open system.)

Why Open Systems Are Important


Open systems are not just about being well documented. They have other properties deeply rooted in the simplicity of an extensible standard, most or all of which could and should be in the public domain. This is important to the growth of new services, third-party equipment, and the kind of international sharing that makes the Internet such a phenomenon.
Why not learn from Wang, Data General, and Prime? What those once high-flying companies had in common was a total disregard for open systems. Open systems exercise the entrepreneurial part of our economy and call into question proprietary systems and broadly mandated monopolies. In an open system we compete with our imagination, not with a lock and key. The result is not only a large number of successful companies, but a wide variety of choice for the consumer and an ever more nimble commercial sector, one that can change and grow.

This may not work for automobile manufacturers, but it does for the computer industry, and it can work for television. The reason is simple: None of us gives a damn about the box; we care about programming. Just as software and system services drive the computer industry, programming and intelligent browsing aids will drive the television industry. Ask yourself: Under which scenario will we see new media and the most innovative content - one featuring an enlarged set-top box, or one featuring open-architecture television?

Next Issue: Modern Multimedia
[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.04 August 1993.]


WIRED 1.03 - Debunking Bandwidth: From Shop Talk to Small Talk

NEGROPONTE

Message: 3 Date: 6.1.93 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Debunking Bandwidth: From Shop Talk to Small Talk

When I was an Assistant Professor of Computer Graphics at MIT in the late '60s, my career had little meaning at a dinner party. Computers were totally outside everyday life. I recall one Boston Brahmin who thought that a joy stick was a sex object. Today, I hear 60-year-old tycoons boasting about how many bytes of memory they have in their Wizards, and the capacity of their hard disks. Others talk half-knowingly about the speed of their processors (thanks to "Intel Inside") and affectionately (or not) about the flavor of their operating systems. I recently met one socialite who provides consulting services; her business card reads "I do Windows."

Bandwidth is different; it remains a mystery to most. This is true because we often have too much when we don't need it or too little when we do. In addition, we scarcely understand the trade-off between bandwidth and intelligence. If computer companies were the only players in our wired lives, we would experience a greater tendency to compute (apply intelligence) at the periphery of the network rather than shipping bits back and forth in wholesale fashion. The computer culture has learned from human interface research that the most supreme form of interaction is the lack of it. Less is more.

Fire Hose Providers


Telephone and cable companies have a different view. It is in their interests to ship as many bits as they can. Look at the fax machine, a perfect example of channel capacity (albeit limited to 9,600 baud today), which allows us to ship pages in exactly the wrong way. Had the world been saddled with the 110-baud rate of the Teletype (requiring about 20 to 30 minutes for transmission of a one-page facsimile), ASCII and page description languages (PDLs) would have prospered, thereby avoiding the extraordinary nosedive of computer-readable information in the past decade. I actually have heard a sophisticated computer scientist suggest the facsimile storage of books, newspapers, and magazines for shipment via gigabit pipes. This suggests an ignorance of the value of computer readability and an allergy to hard problems such as intelligent PDLs.
The Phone Companies Believe Their Own Arguments


Judge Greene made a terrible mistake when he barred the Regional Bell Operating Companies (RBOCs) from entering the information and entertainment industries. It has taken almost ten years to correct this error. Ironically, the RBOC lobbyists used a gratuitous but effective argument to get into the game. They claimed that unless they became content providers, they could not justify the enormous cost of a new infrastructure (read: fiber). The argument worked. But now some of the telephone companies are forgetting just how specious it was: We don't know what to do with that bandwidth. We are staring at a $60 billion installed telephone plant of copper and fiber that offers enormous untapped opportunity.

Worse, the Clinton administration is buying the wholesale need for, and provision of, bandwidth to maintain a major competitive edge without recognizing what Mother Nature and commercial imperatives already provide. More bits per second is not an intrinsic good. In fact, more bandwidth can have the deleterious effect of swamping people and of allowing machines at the periphery to be dumb.

Two Paper Cups and a String


I am fond of using the example of a wink as a form of massive data compression in human-to-human communication between intimate friends. In effect, this is one bit transmitted through the ether that could require at least 100,000 bits to explain to a third person. At that compression ratio we could transmit more than ten channels of NTSC television over a 300-baud modem.

There is a tendency to think of the trade-off between bandwidth and intelligence as merely a matter of computer cycles in the transceiver. But the transceiver should also contain knowledge of the signal. A simple example: Store all the static video information from, say, 50 movies on a CD-ROM (by itself a useless disc); then later, on demand, use ISDN to squirt 64 Kbits into this memory to reconstitute any one of these movies by delivering only the motion or other in-betweening data.
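The arithmetic behind the wink claim above, with an assumed figure of roughly 3 million bits per second for a digitally compressed NTSC channel (my assumption; the column gives none):

\[
300\ \mathrm{bit/s} \times 10^{5} = 3\times 10^{7}\ \mathrm{bit/s},
\qquad
\frac{3\times 10^{7}\ \mathrm{bit/s}}{3\times 10^{6}\ \mathrm{bit/s\ per\ channel}} = 10\ \text{channels},
\]

which squares with the column's "more than ten channels" for any channel rate a bit under that assumption.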

Nature's Role in Copper Versus Fiber


Few people know how good copper twisted pair is. Asymmetrical Digital Subscriber Loop (ADSL-1) can provide 1.544 Mbits per second into, and 64 Kbits per second out of, 75 percent of American and 80 percent of Canadian homes. ADSL-2 runs above 3 Mbits per second and ADSL-3, above 6 Mbits per second. ADSL-1 is fine for VCR-quality video. Which would you prefer: 500 channels from which you can choose one, or one channel that can be switched to any source on the network?

It is absolutely true that fiber delivers thousands, in fact, millions of times more bandwidth. Frankly, we don't really know the limits of fiber. In addition, fiber now costs less than copper: when lines are updated, fiber will be used, with or without a need for bandwidth. Therefore, fiber will come into being automatically through the forces of common sense and Mother Nature.

Is That Soon Enough?


Dates like the year 2005 or 2010 are frequently heard estimates of when fiber will pervade the world, given appropriate investments and incentives. However, without any new incentives, telephone companies update three to five percent of their existing infrastructures each year. Some cable companies are proposing updating 80 percent of their plant in less than five years.

But here is the punch line: Why are we worrying about billions of bits per second into the home when we haven't used 1.5 to 6 million bits per second creatively? Yes, I will need those billions when I watch holographic television or expect a can of spinach to be teleported into my home. But in the meantime? Dear telephone companies, now that your argument prevailed, please take advantage of your installed base of copper twisted pair, which can provide so much more than you are telling people - including video on demand, which is really in demand.

Next Issue: The Small Vision of the Set-top Box
[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.03 June 1993.]


WIRED 1.02 - The Bit Police: Will the FCC Regulate Licenses to Radiate Bits?

NEGROPONTE

Message: 2 Date: 4.1.93 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

The Bit Police: Will the FCC Regulate Licenses to Radiate Bits?

The FCC has decided to give television broadcasters 6 MHz of additional spectrum for HDTV on the condition that currently used spectrum is returned within fifteen years. It is a foregone conclusion, thank goodness, that HDTV will be digital, and will probably operate at 20 million bits per second.

Now, imagine that you own a TV station and the FCC just gave you a 20 million bits-per-second license. You have just been given permission to become a local epicenter in the bit radiation business. What would you do with your license? Face it, the very last thing you would do is broadcast HDTV - if only because the programs would be scarce and the receivers few. Anyway, as I hope I made clear in the last issue, television's DNA is not connected to picture resolution.

So this is what you might do: First, with a little cunning, you'd probably realize that you could broadcast four channels of digital, broadcast-quality, standard NTSC television, thereby increasing your audience share and advertising revenue. Upon further reflection, you might decide to transmit three TV channels, two digital radio signals, a news data channel, and a paging service. It continues. At night, when few people are watching TV, you might use most of your license to spew bits into the ether for delivery of personalized newspapers to be printed in people's homes. Or, on Saturday, you might decide that resolution counts (say, for a football game) and devote 15 million of your 20 million bits to high-definition transmission. Literally, you will be your own FCC for those 20 million bits, allocating them as and when you see fit. That is, if the Bit Police don't stop you.
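As a sketch of what being your own FCC might look like, here is a small Python program (the services and rates are illustrative, not a proposal) that checks two daypart plans against the 20-million-bit license:

```python
# A sketch of a station allocating its own 20 Mbit/s license by daypart.
# Service names and rates are invented for illustration.
BUDGET = 20_000_000  # bits per second granted by the license

weekday = {"NTSC channel 1": 5_000_000, "NTSC channel 2": 5_000_000,
           "NTSC channel 3": 5_000_000, "digital radio": 256_000,
           "news data": 128_000, "paging": 64_000}
saturday = {"HDTV football": 15_000_000, "NTSC channel": 5_000_000}

for name, plan in [("weekday", weekday), ("saturday", saturday)]:
    used = sum(plan.values())
    assert used <= BUDGET, f"{name} plan is over budget"
    print(f"{name}: {used / 1e6:.2f} of 20 Mbit/s allocated")
```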
To be perfectly clear, this is not what the FCC originally had in mind when it allocated HDTV spectrum among existing broadcasters. The body politic, particularly groups hankering for spectrum, will scream bloody murder when it realizes that TV stations just had their current broadcast capacity increased by 400 percent, at no cost, for the next fifteen years! Does that mean we should send in the Bit Police to make sure that this new spectrum is used only for HDTV?

The Model is the Medium


What will happen in television over the next five years is so phenomenal that it's difficult to comprehend. On the one hand it is easy to state: We are in the process of leaving an analog world and entering a digital one. For example, we once thought that audio, video, and data were different and discrete types of communication, but now we see them converging. They are all bits.

In the near future, bits will be assigned to a particular medium by the broadcaster at the point of transmission. This is usually what people mean when they talk about digital convergence or bit radiation. But in the more distant future, bits won't be confined to any medium, as such, but will instead constitute a digital model that is transcoded into audio, video, or print by an intelligent receiver.

Currently, we allocate spectrum to TV, radio, and various applications, in part because the highways of the sky required (we thought) well-marked and impenetrable median strips. One could easily determine, in advance, what would be found in the spectrum: voice, data, video, and so on. But soon that will be gone - consider a phone company, which has no idea whether you are passing voice, e-mail, or fax over its wires.

The next step, which assumes an intelligent receiver, gets even more complicated. Consider the possibility of a digital model of weather information. Your receiver, no less a computer than a TV, will digest and process a broadcast of that model (purists are welcome to call the model the medium). It will convert the model to sound or image, hard copy or soft copy, in greater or less detail, at your discretion (or its own inference). This is to say that the output is determined after the fact. By you. This truly is data broadcasting, and beyond regulatory control.

Probably most readers assumed that my mention of a Bit Police was synonymous with content censorship. Not so! The consumer will censor by telling the receiver what bits to select. The Bit Police will want to control the medium itself, which makes no sense whatsoever. The problem, strictly political, is that the new allocation looks like a handout. While the FCC had no intention of creating a windfall, minority and special interest groups will raise hell because the bandwidth rich are getting richer. While there will be a fuss and some regulatory legislation, in the end all bits will be deregulated.
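A minimal sketch, with an invented weather model, of output determined after the fact: the same broadcast bits, rendered into two different media by the receiver:

```python
# A sketch of data broadcasting: the broadcast is a model, not a finished
# program, and the receiver picks the medium. The model is made up.
model = {"city": "Boston", "high_c": 4, "low_c": -3, "sky": "snow"}

def render(model, medium):
    if medium == "text":      # hard copy, headline style
        return (f"{model['city']}: {model['sky']}, "
                f"{model['low_c']} to {model['high_c']} C")
    if medium == "speech":    # words destined for a speech synthesizer
        return (f"In {model['city']}, expect {model['sky']}, with a high of "
                f"{model['high_c']} and a low of {model['low_c']} degrees.")
    raise ValueError("unknown medium")

print(render(model, "text"))    # the same bits, two different media
print(render(model, "speech"))
```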

WIRED in a Wireless and Multimedia World


Take an example: this magazine. WIRED, like most magazines, is in a purely digital form during its creation.
The text is in computer-readable form. The images are scanned and the layout produced on a desktop publishing system. The style of WIRED's creation is the epitome of both a digital process and a digital lifestyle (my contributions, for example, are destined to be written from the seat of an airplane and sent to WIRED via e-mail). Only when the final pages are output to film for printing does the digital representation vanish.

Let's pretend that instead of providing WIRED in hard copy, we could transmit it in bits. The subscriber could transcode them into print form or more interactive soft copy. We would create a very different magazine - among other things, we would provide varying levels of detail, our cutting room floor would be empty, and the magazine (if we still use the word) would be conversational. The message is that all information providers will be in a common business - the bit radiation business - not radio, TV, magazines, or newspapers.

I do not believe there will be a Bit Police. The FCC is too smart. Its mandate is to see advanced information and entertainment services proliferate in the public interest. There is simply no way to limit the freedom of bit radiation any more than the Romans could stop Christianity, even though a few brave and early data broadcasters will be eaten by the Washington lions in the process.

In the last issue, my e-mail address was listed as Negroponte@Internet. That bogus address was a misjudgment, meant to leave the impression that most of my communications with WIRED are by e-mail, which is true. The above address is real. That does not mean I will answer all fan or hate mail, but at least I will see it.

Next Issue: Debunking Bandwidth
[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.02 April 1993.]


WIRED 1.01 - HDTV: What's wrong with this picture?

NEGROPONTE

Message: 1 Date: 1.1.93 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

HDTV: What's wrong with this picture?

High-definition television is clearly irrelevant.


When you look at television, ask yourself: What's wrong with it? Picture resolution? Of course not. What's wrong is the programming. Why is this aspect of the big picture so unclear?

Showgun
During the late sixties, a few visionary Japanese asked themselves what the next evolutionary step in television would be. They reached a very logical conclusion: higher resolution. They postulated that the move from black-and-white to color would be followed by filmic-quality TV, which in turn would be followed by 3-D TV. They proceeded, in their inimitable style, to develop something called Hi-Vision by scaling up TV as we know it in the analog domain.

Around 1986, Europe awoke to the prospect of Japanese dominance of a new generation of television. For totally protectionist reasons, Europe developed its own analog HDTV system, HD-MAC, making it impossible for Hi-Vision, which the United States officially backed at the time, to become a world standard. More recently, the US, like a sleeping giant, awoke from its cryogenic state of mind and attacked the HDTV problem with the same analog abandon as the rest of the world.

However, this awakening occurred at a time when it was possible to think about television in the digital domain. The perseverance of a few has resulted in our nation being the sole official proponent of a purely digital process. That's the good news. The bad news is we blew it. We made the same mistake as Japan and Europe when we decided to root our thinking in high definition. Despite a great deal of hand waving, the truth is that all these systems (currently under consideration for a national standard by the Federal Communications Commission - which President Clinton could then change) were constructed on the premise that achieving increased image quality is the relevant course to pursue.

This is not the case, and there is no proof to support the premise.

Prime Time Is My Time


What is needed is innovation in programming, new kinds of delivery, and personalization of content. All of this can be derived from being digital. The six-o'clock news can be not only delivered when you want it, but it also can be edited for you and randomly accessed by you. If the viewer wants an old Humphrey Bogart movie at 8:17 pm, the telephone company will provide it over its twisted-pair copper lines. Eventually, when you watch a baseball game, you will be able to do so from any seat in the stadium or, for that matter, from the perspective of the baseball. That would be a big change.

As intelligence in the television system moves from the transmitter to the receiver, the difference between a TV and a personal computer will become negligible. It can be argued that today's TV set is, per cubic inch, the dumbest appliance in your home. As the television's intelligence increases, it will begin to select video and receive signals in "unreal time." For instance, an hour's worth of video - based on a consumer's profile or request - could be delivered over fiber to an intelligent TV in less than five seconds. All personal computer vendors are adding video capabilities, thereby creating the de facto TV set of the future. While this view is widely respected, it is not yet accepted worldwide.
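The arithmetic behind that five-second figure, assuming VCR-quality video at roughly 1.5 million bits per second (my assumption; the column does not give a rate):

\[
3600\ \mathrm{s} \times 1.5\times 10^{6}\ \mathrm{bit/s} \approx 5.4\times 10^{9}\ \mathrm{bit},
\qquad
\frac{5.4\times 10^{9}\ \mathrm{bit}}{5\ \mathrm{s}} \approx 1.1\times 10^{9}\ \mathrm{bit/s},
\]

a rate comfortably within what a single fiber can carry.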

Reckless Nationalism
TV is so bound in culture that even some very democratic countries legislate the number of hours that foreign programming is allowed on their domestic channels. Less democratic nations use TV for propaganda and control. This blending of the cultural with the potentially political has crept into the technical arena and, for a variety of gratuitous economic reasons, we are presented with the likely nightmare that Japan, Europe, and the United States will go in totally different directions vis-a-vis TV. However, my bet is that 1993 will be the year these diverging courses correct themselves and converge. Europe, Japan, and the US will collaborate, and being digital will be recognized, finally, as a truly evolutionary step.

Why am I optimistic after outlining such gloomy polemics? For several reasons, all relating to one question: Where is the action? Nintendo, Sega, Apple, and IBM - not your run-of-the-mill TV makers - will present us with a burst of multimedia products in the home very soon. At least 200,000 direct broadcast satellite receivers, fully digital, will hit the stores in time for Christmas. And cable operators are trying to get digital TV even sooner than that. Namely, there will be an outpouring of digital video services that have absolutely nothing to do with HDTV, and they will be in place long before action can be taken on any FCC decision if, in fact, one is made. Finally, a small band of multinational people are making great progress in the standards arena. The roots of digital/video harmony reside in the Motion Picture Experts Group, MPEG, which is a bona fide part of ISO, the International Standards Organization.
As Scalable as the US Constitution


The biggest reason to be optimistic is that the digital world carries with it a great deal of tolerance for change. We will not be stuck with NTSC, PAL, and SECAM, but we will command a bit stream that can be easily translated from one format to another, scaled from one resolution to another, transcoded from one frame rate to another - independent of aspect ratio. Digital signals will carry information about themselves and tell your intelligent TV what to do with them. If your TV does not speak a particular dialect, you may have to visit your local bookstore and buy a digital decoder, just like you buy software for your PC today. Being digital is a license to grow.
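A small sketch of why a digital frame is a license to grow: nearest-neighbor resampling, the crudest possible translator, scales one resolution to another with no allegiance to any particular line count. The example is mine, not any broadcast standard:

```python
# A sketch of resolution scaling: nearest-neighbor resampling maps a frame
# of one size onto a display of another.
def rescale(frame, new_rows, new_cols):
    rows, cols = len(frame), len(frame[0])
    return [[frame[r * rows // new_rows][c * cols // new_cols]
             for c in range(new_cols)] for r in range(new_rows)]

frame = [[0, 1], [2, 3]]     # a 2x2 "picture"
print(rescale(frame, 4, 4))  # the same picture, fitted to a 4x4 display
```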
The manner in which memory and features are added to your PC or organizer will be the same for your TV. When people argue over the number of scan lines, the frame rate, or the aspect ratio of television in the future, one can rest assured they are discussing the most irrelevant pieces of the puzzle. What they should be talking about are the consequences of being digital and the enormous changes that will affect the delivery of information and entertainment. Namely, the future of video is no different from that of audio or data; it will be nothing but a bit stream.

Next Issue: Will There Be a Bit Police?

[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.01 January 1993.]


WIRED 4.05 - Caught Browsing Again

NEGROPONTE

Message: 35 Date: 5.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Caught Browsing Again

Browsing is an obvious idea, but it is not necessarily the right one. Too much of the Net's future is staked on this unchallenged notion. And the sooner we stop relying on this concept, the better.

Just think: How much browsing do you do in real life, or, as John Perry Barlow would say, in "meatspace"? Most working adults don't have time to spare. Browsing is better suited to the confines of a doctor's waiting room, an airplane seat, or a rainy Sunday afternoon. Rarely does browsing suggest the serious, productive use of one's time. Rather, it suggests another era, when work, home life, and vacations were less entwined than they are today.

So what happened? Why did we suddenly elevate this faulty, serendipitous, and almost haphazard process to its current prominence - even predominance - on the Internet? The verb browse is derived from the behavior of hungry animals who, in winter when pasture is barren, forage for tender shoots and the buds of trees and bushes. This implies that there isn't a lot to choose from and that what is good needs to be actively sought out. But browsing takes time - the one thing most of us don't have. For example, I do far less window-shopping than I did when I was young (and yes, I miss it). Undeniably, browsing can be fun and useful but, as with tourism, only so much and so often. Funny how we use the words cruising and surfing to describe our behavior on the Web. How often do we invoke the words learning or engaging when we browse?

The Web is a digital landmark, as important as the Net itself. Its inventors, Tim Berners-Lee and his colleagues, will probably never fully realize how important their contributions were, and will continue to be, because the Web can be viewed in so many different ways. For me, it's less about multimedia or hyperlinks and more about turning the Net inside out. Instead of sending email to an individual - or to a list of individuals - I can now post a message and invite people to look it over. Sure, we've always had bulletin boards, telnet, and ftp on the Net, but the Web created a new and more accessible subworld, one more like the window-shopping experience than the original message-passing rubric. And in a way, that's a shame.
Think of the change this way: the Internet is now like a city - people go places, visit communities. In fact, we even call our own pages "home." But when we arrive at a place and try to make things happen, we often end up frustrated.

Direct manipulation doesn't work


After years of work to make computers more accessible, researchers began questioning whether people really wanted ease of use in the first place. Using a computer is often just a means to an end, so wouldn't that end be better served if we could delegate tedious computer time to someone - or something - else? As far back as 25 years ago, Alan Kay and others suggested that "delegation" was a much better metaphor than "direct manipulation" when focusing on people's productive use of computers. Later, at Apple - the font of many human-interface advances - delegation became the challenge and "interface agents" became the solution, at first in name only.

Instead of constructing a computer that is easy to manipulate directly, the argument goes, why not fashion it after a well-trained English butler who knows you so well that he will do almost everything on your behalf? He will do what you ask and, in some cases, you won't even have to ask. Most important, the idea of an interface agent entails its ability to "understand what I mean." Please just do it and don't bother me. When it comes to interaction with such an agent, less is more. Let it do the searching, surfing, and cruising. Let it browse for you and bring you the fruits of its labors.
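Here is a minimal sketch of delegation, with invented page data: a tiny agent "browses" a pile of pages against its owner's standing interests and reports back only the matches.

```python
# A sketch of an interface agent browsing on its owner's behalf.
# The pages and interests below are made up for illustration.
pages = [
    {"title": "ADSL trials expand", "words": {"adsl", "copper", "video"}},
    {"title": "New pasta recipes",  "words": {"cooking", "pasta"}},
    {"title": "Set-top box wars",   "words": {"cable", "set-top", "video"}},
]
interests = {"video", "adsl"}

def agent(pages, interests):
    # rank pages by overlap with the owner's interests; drop the irrelevant
    scored = [(len(p["words"] & interests), p["title"]) for p in pages]
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(agent(pages, interests))  # the butler reports back; you never browsed
```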

The net population of the Net


In America today, the demographics of personal-computer use are oddly bimodal. Most kids have some access. But, surprisingly, the next largest and fastest growing group (as a percentage of the age group per capita) is those age 55 and older (more than 30 percent of whom own a personal computer). Between these two groups, we have what I call the "digital homeless": those who arrived on the planet a little too early, or not early enough, to have the time to explore the possibilities of being digital. Many people in this group feel that the online world has nothing to do with them and (according to a report in Business Week) value their hair dryer more than a personal computer.

Among those who are digital, particularly the young and the old, we find a majority of people with free or flexible time, people who can literally afford to spend their time browsing. I cannot. I need to delegate that process. And I'm not alone.

By 2000, we can expect a billion users on the Net. As recently as a year ago, this number seemed outrageously high. Today it is considered a conservative estimate. What this statistic fails to include is the huge number of machines and software programs that will use the Net on our (and their own) behalf. At the turn of the millennium, we're likely to find those billion human users joined by a much larger number of software agents, Web crawlers, and other computer programs that will do the browsing for us. The Net will be roamed mostly by programs, not people.
When people do use the Net, it will be for more suitable purposes: communicating, learning, experiencing. The idea that machines, not people, will dominate Net usage turns the model upside down, not just inside out. Suddenly "pages," if that's even an appropriate term, will need more and more computer-readable hooks so that programs can see what you or I view from the corner of our eye. When we browse, our eyes gravitate toward images - in the future, these images will need simple digital captions. This will certainly take steam out of the Net-based advertising we know today. Simply put, our eyeballs may not be there to see it.

Next Issue: Who Will the Next Billion Users Be?
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.05 May 1996.]


WIRED 4.06 - Who Will the Next Billion Users Be?

NEGROPONTE

Message: 36 Date: 6.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Who Will the Next Billion Users Be?

The question I'm asked most often is "Will the information rich get richer while the information poor get poorer?" My answer is "No." But that reply may be too quick and simple.

If you agree that the Net will have a billion users by the turn of the century, you probably have also assumed that the majority of these users will be in developed nations. After all, of the roughly 10 million host machines that exist today, more than half are in the United States. Many of the rest are in G7 nations. In fact, the 50 least-developed countries of the world - those with less than US$500 per capita GDP - currently sport 23 host machines. (Curiously, 19 are in Nepal.) My point is that the information rich today are indeed rich and the information poor are indeed poor. But this will change.

Consider a country like Malaysia, where the people value education, and the government, albeit slightly despotic, promotes development in a grand way. At the moment, there are 20,000 Internet users in Kuala Lumpur, a number that is growing by 20 percent each month. At this rate, all of Malaysia (some 19.1 million people) will be online by 2000. So far, we haven't been counting these people in our billion users calculation. And Malaysia is not the only country growing at this rate.
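A quick check of that arithmetic in Python - 20,000 users compounding at 20 percent a month against 19.1 million people:

```python
# Compounding 20,000 users at 20 percent per month until all of Malaysia
# (19.1 million people) is nominally online.
users, months = 20_000, 0
while users < 19_100_000:
    users *= 1.2
    months += 1
print(months, "months")  # 38 months - roughly three years, i.e., by 2000
```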

Gung-hoism
Consider the most gung-ho person in your neighborhood, the one who enthusiastically embraces trash collection, babysitting, and a host of other local civic projects. The neighbor who comes to mind is probably the newest arrival. Said another way, the most devout among us are frequently those who have most recently converted. We're all familiar with new email users who go berserk and swamp us with interminably long and chatty messages. This can happen on a global scale and is something to ponder when you realize that India and China represent more than 2 billion people.

But the difference between using computers for email and, for example, primary education is that the former may be an infatuation while the latter can provide an everlasting square meal of digital nutrition. In general I'm very optimistic, especially about the developing world rapidly "becoming digital."
Almost half of the populations of developing nations are under 20, in contrast to less than a third in developed countries. Typically, this youth corps is considered a liability. But given the existing base of people, a large youth population is an asset as nations move forward, particularly in countries where older members of society are less literate. We all know that kids take to computers as they do to language, and that given the chance, they will jump into the digital world with passion, delight, and abandon.

When PCs were only "personal computers," educational opportunities - especially in the developing world - were limited by the amount of software "second guessed" to be appropriate. With the Internet, this changes dramatically. It's no longer necessary to plot every step in advance. Kids can teach other kids around the world. Reasons for being able to read and write will become obvious.

Two fixable problems


You're probably saying, Sure, Nicholas, we love the idea of developing countries jumping into the digital age, but what about the problems of communications and cost? In developing countries the telephone systems are not just dilapidated, scarce, and poorly run, they're also outrageously expensive monopolies that, in almost every instance, are state owned. It is often difficult to tell if a lousy system is a result of a shabby infrastructure, an inefficient - even corrupt - civil service, or both. For these reasons, it would be great to pull away from such earthly flaws and use a grid of low-orbiting satellites - like Iridium or Teledesic - to link the schools of the developing world. At least it is possible - without digging up Africa or managing 100 different phone companies.

Access to low-cost computers seems more difficult. As we press machines into harder duty and make them more sophisticated, we sometimes forget that for some people simple equipment is much better than none at all. A 386 laptop - perfectly serviceable for Net connections, word processing, and graphics - can be built today for under $250. That's important. And backing away from hardware expectations is not the only issue; trimming our operating systems is even more vital. Windows 95 makes no sense in most of Africa. A svelte, stripped-down version is needed so that memory demand, among other things, is modest. When that happens, the next billion users may not be composed of our digitally homeless middle-class relatives; rather, a totally new group of young, eager minds from "elsewhere" may emerge.

A call for a school corps


We now need at least 500,000 young men and women from developed nations who are willing to spend a year in the developing world as part of a school corps - like the Peace Corps. These young people would be a resource for more than 100 children each (a conservative figure) within the 48 countries considered by Unesco as the "least developed." Universities would be wise to support such an initiative by offering academic credit for a new kind of junior year abroad. Believe me, most students would gain far more from teaching 6-year-olds in Africa than in the classroom at home.
Running such an effort would cost about as much as a few F-15s. The problem is not money, but how to do it. Under whose aegis? Unesco is too politicized, and the World Bank would want its money back. It may be time to create a new United Nations for cyberspace, an organization with a five-year half-life to make the digital world immediately available to everyone. It cannot be done country by country - governments move so slowly, and most are run by the digitally homeless, anyway. Something very new is needed. If you have a good idea, speak up. Use the email address above. Seriously.

Next Issue: Object-Oriented Television
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.06 June 1996.]


WIRED 4.07 - Object-Oriented Television

NEGROPONTE

Message: 37 Date: 7.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Object-Oriented Television

The Media Lab's Michael Bove believes that a television set should be more like a movie set. But movies require locations, actors, budgets, scripts, producers, and directors. What would it mean, Bove wonders, if your TV worked with sets instead of scan lines?

Sets and actors


For too long, TV has taken its lead from photography, which collapses the three-dimensional world onto a plane. Except for the image-sensing mechanism attached to the back, today's TV camera is very similar to a Renaissance camera obscura. This long-standing construct is perhaps the wrong way to think about television. Maybe there is a way to capture the scene as a collection of objects moving in three dimensions versus capturing a single viewpoint on the scene. Think of it as a computer graphics process, more like Toy Story than Seinfeld.

The networked virtual reality language VRML has such a model behind it. But it's difficult to author good virtual worlds from thin air, so there aren't any out there on the Web that are as funny as Seinfeld or as appealing to the public as college basketball. What we need is "real virtuality," the ability to point a computer or camera at something and later look at it from any point of view. This is particularly important to Hollywood, because most of the cost of a movie is in front of the camera, not behind it. Object-oriented television should cost less both in front and behind, and not look cartoonlike.

It will still involve cameras, but instead of giving the postproduction people (or the viewers of an interactive program) a switch that flips between cameras one and two, these cameras will contribute what they observe to a database from which any viewpoint can be constructed. Similarly, TV sound should be object-oriented. Instead of left and right channels, sound can be represented as individual sound sources in an acoustically modeled space so that on playback we can resynthesize the speaker to correspond with the arrangement of things on the screen and the viewer's path through them.
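A toy sketch of the idea (mine, not Bove's system): if the broadcast is a database of named objects with 3D positions, the receiver can synthesize any viewpoint with an ordinary pinhole projection. Everything below is invented for illustration:

```python
# A sketch of "sets, not scan lines": the scene is a database of objects,
# and any viewpoint is synthesized at the receiver by projection.
scene = [("anchor desk", (0.0, 0.0, 5.0)), ("weather map", (1.0, 0.5, 6.0))]

def project(point, camera, focal=1.0):
    # translate into camera coordinates, then perspective-divide
    x, y, z = (p - c for p, c in zip(point, camera))
    return (focal * x / z, focal * y / z)

for cam in [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]:  # two "seats in the stadium"
    print(cam, [(name, project(pos, cam)) for name, pos in scene])
```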
The bit budget


TV is a bandwidth pig. Ten years ago, a common assumption was that 45 million bits per second were needed to obtain studio-quality television. Today, that level of performance is possible at 4 million bps. That's quite an improvement, but compared with the 29,000 bps you get when connecting to the Internet (if you're lucky), we still have a long way to go.

There is one fundamental reason for this profligate use of bandwidth. TV receivers are dumb - in particular, they are forgetful. On a per-cubic-inch basis, your microwave oven may be smarter. A TV set is spoon-fed pixels - line by line, frame by frame. Even if you compress them by taking out the enormous redundancy that occurs within and between frames and by taking advantage of the characteristics of human vision, video as we know it still uses many more bits than a computer graphics database capable of synthesizing the same images. Inefficiency also results from a lack of memory. Your TV doesn't remember that the set of the local news changes only about once every three years, it doesn't remember the architecture of sports arenas, and it doesn't remember the Steve Forbes commercials seen six times each hour by those of us living in states holding early primaries.

The digital TV sets about to hit the market are able to do a lot more multiplications per second than your microwave oven, but they still aren't "clever." They decode a closed-form standard, known as MPEG-2 (derived from the Motion Picture Experts Group). MPEG-2 may be among the last standards for which anyone bothers to develop a dedicated chip. Why? Because a single data standard for digital video, one that is always best, just does not exist. We need a flexible decoder capable of interpreting whatever the originator (or an automatic process) decides is the best way to encode a given scene. For example, it would be more efficient (and legible!) to transmit the fine print during car-lease commercials as PostScript (a common standard for typography and printers) instead of MPEG. Your TV's decoding capabilities might be updated as often as your PC's Web browser is now. Perhaps TV viewers in the next decade will eagerly look forward to September as the month when the next season's algorithms get downloaded.
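What a flexible decoder might look like, reduced to a sketch: the stream names its own encoding, the receiver dispatches on it, and unknown encodings are fetched like any other software. The format names and the download path below are hypothetical:

```python
# A sketch of a flexible decoder: a table of decoders keyed by the
# encoding each packet declares, extensible at run time.
decoders = {
    "mpeg2":      lambda payload: f"<video: {payload}>",
    "postscript": lambda payload: f"<typeset fine print: {payload}>",
}

def download_decoder(fmt):
    # hypothetical update path - September's new algorithms would land here
    raise NotImplementedError(f"no decoder for {fmt!r} yet")

def receive(packets):
    for fmt, payload in packets:
        if fmt not in decoders:
            decoders[fmt] = download_decoder(fmt)
        print(decoders[fmt](payload))

receive([("mpeg2", "car chase"), ("postscript", "lease terms")])
```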

Storytelling
Having actors and sets hang around in our TVs isn't going to do us a lot of good unless we can tell them to do something interesting. So, in addition to objects, we need a script that tells the receiver what to do with the objects in order to tell a story. TV conceived as objects and scripts can be very responsive. Consider hyperlinked TV, in which touching an athlete produces relevant statistics, or touching an actor reveals that his necktie is on sale this week. Bits that contain more information about pixels than their color - that tell them how to behave and where to look for further instruction - can be embedded.
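And a toy of that hyperlink idea, with invented object data - each on-screen object carries bits about itself, so a touch can fetch something relevant:

```python
# A sketch of hyperlinked TV: on-screen objects carry their own metadata.
objects = {
    "athlete_23": {"kind": "player",  "stats": "12 pts, 7 rebounds"},
    "necktie_4":  {"kind": "product", "offer": "on sale this week"},
}

def touch(object_id):
    obj = objects[object_id]
    return obj.get("stats") or obj.get("offer")

print(touch("athlete_23"))  # 12 pts, 7 rebounds
print(touch("necktie_4"))   # on sale this week
```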
These bits-about-the-bits will resolve a problem that has beleaguered Hollywood directors faced with one-version-fits-all screens and made them envious of graphic designers, who can design postage stamps, magazine ads, and highway billboards using different rules of visual organization. Television programs could react according to the originator's intention when viewed under different circumstances (for instance, more close-ups and cuts on a small screen). You think Java is important - wait until we have a similar language for storytelling.

TV is, after all, an entertainment medium. Its technology will be judged by the richness of the connection between creator and viewer. As Bran Ferren of Disney has said, "We need dialog lines, not scan lines."

This article was co-authored by V. Michael Bove (vmb@media.mit.edu), Alexander Dreyfoos Career Development professor at MIT's Media Lab.

Next Issue: Building Better Backchannels
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.07 July 1996.]


WIRED 4.08 - Building Better Backchannels

NEGROPONTE

Message: 38 Date: 8.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:


Where am I? Are the lights on? Who's looking at me? Am I indoors or outdoors? What is all that noise - or is there any?

Building Better Backchannels

Try the following: Close your eyes and plug your ears. Imagine you are your own personal computer. Try it. You can't see you, you can't hear you - you just get poked at now and again. Not great, is it? No matter how helpful you want to be to you, it's tough going. When deprived of knowing what's happening around you, all the intelligence in the world won't make you into a faithful servant. It would be frustrating for you to be your own personal computer. The backchannels are far too limiting.

Two decades ago, computers were totally sensory deprived, even in the direction of computer to user. Today, the flow of information from computers to people offers a richer experience, with color, motion, and sound. However, the opposite path - from people to computers - enjoys no such amelioration. Computer inputting today is almost as tedious as it was 20 years ago. In this sense, the interface viewed as a whole has become wildly asymmetrical - lots out, little in. Your personal computer doesn't have the recognition capability of a parrot.

Making computers listen


The primary channel of communication between computers and users during the next millennium will be speech - people talking to computers and computers talking back. Yet statements I made in 1975 to the effect that speech would be the dominant interface in 20 years haven't come true. What went wrong? Simple: we became lazy; corporate users did not complain. We also underestimated the speed at which computers would become popular with consumers. Remember when Ken Olsen, founder - and at the time CEO - of Digital, said that he saw no earthly reason to have a
computer at home? That was in 1977. Given that attitude, many computer corporations sat on their digital butts, enjoying a marketplace of corporate purchasing agents - people who bought computers for others to accomplish the tasks outlined in their job descriptions. Under those conditions, users were expected to suffer the indignity of using a computer and to dutifully work around, among other things, its hearing impediment.

Now, suddenly, consumers like you and me are buying more than 50 percent of all PCs to use in our homes, to assist our children, and to entertain. Under these new conditions, a deaf (and dumb) computer is not acceptable. Change will occur only when manufacturers start taking the word personal in personal computers seriously. By this I mean building speaker-dependent voice recognition (which is so much easier than speaker-independent recognition). Also, manufacturers must focus on highly interactive speech, not transcription, which even humans cannot do properly.

For those readers who think life will become terribly cacophonous in the presence of machines that talk and listen, let me say that we seem to work just fine with telephone handsets in our homes and offices. And for those of you who feel it is plumb silly to talk to an appliance, recall for a moment how you felt about answering machines not too long ago. No, speech really is the right channel, and it is time, once and for all, to move with resolve.

Pin the tail on the donkey


Imagine computer eyes all over the place - not just stereo, but holovision. This new vision system will have cameras anywhere and everywhere, not just on the computer's front or sides. Computers can leave their eyes festooned everyplace.

In earlier Wired issues I have commented on the coincidence that PC-based teleconferencing systems employ a camera above the display and a speaker below it, resulting in a unit that could serve equally well as a computer's eye and ear. Such a configuration can look and know you are there. It can know if you are smiling. That kind of "seeing the user" is important, because today's computers cannot even detect your presence - something a common toilet can do.

But let's go a step further. It is not just a matter of removing the computers' blindfolds, but of giving them a new kind of vision, the ability to look into each room in your house, the oven, the closets, the garden - even the traffic on your street. Furthermore, these eyes need not be like ours. They should be able to look at infrared and ultraviolet, like bats and radar. The value of looking at nonvisible light includes such examples as night vision and recognizing a smile by the minuscule change in heat that occurs at the corners of our mouths.

The user-near field


When people relate to one another, they are not simply in one of two states - far away or touching. There is an important near field in human communication. Perhaps a nod occurs before a handshake, or a smile before a kiss. We enjoy a gray tone in our proximities with each other. Computers have none of this. Either you are there (touching them) or you are not.

Recall the deaf-mute PC exercise at the beginning of my rant. Now imagine that each communication with a user is like someone sneaking up behind you and yelling "Boo!" At this point you may be rolling your eyes (versus closing them), saying, "Nicholas, you've finally lost it." But think for a moment. How would you function if every interaction were limited to being there or not being there, no forewarning, no near field?

Backchannels are crucial. It is not simply a matter of making the interface more symmetrical, with as much in as out. It is a matter of including very tightly coupled signals of understanding and appreciation. Without these, talking to a computer will remain as fulfilling as talking to a lamppost.

Next Issue: The Future of Phone Companies
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.08 August 1996.]


WIRED 4.09 - The Future of Phone Companies

NEGROPONTE

Message: 39 Date: 9.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

The Future of Phone Companies

Shipping bits will be a crummy business. Transporting voice will be even worse. By 2020, there will be so many broadband paths into and out of your home that competition will render bandwidth a commodity of the worst kind, with no margins and no real basis for charging anything. Fiber, satellites (both stationary and orbiting), and all sorts of terrestrial wireless systems will pour bits galore into your home. Each channel will have so much spare capacity that measuring available bandwidth will make as much sense as counting photons passing through a window.

Scarcity creates value. Since fiber (including transducers) now costs less than copper (except for the shortest lengths), we will be installing fiber even if we do not need the bandwidth it provides. POTS, plain old telephone service, is better served and more inexpensively installed and maintained using fiber. Japan will have it in every home by 2015.

There will be such a glut of bit-transportation capacity that vendors will be giving it away to get you to buy something or just to look at advertising. And we will soon be exchanging bits among ourselves that represent almost anything but real-time voice traffic.

Voiceless telephony
Today, the telephone companies take the phone in their name far too seriously. For example, they worry about Internet-based telephony without realizing that their real problem will be the reduction of real-time voice traffic in the digital age. Our great-grandchildren will be astonished and amused when they recall the waste and financial loss incurred at the end of the 20th century playing telephone tag. Their telecommunications world will be far more asynchronous than ours and will be based mostly in ASCII, not in audio or graphic renditions of it.

"Hello?" The word is with us thanks to the telephone. Early telephone operators were called hello girls. While we have no hello girls today asking, "Are you finished?" we still use hello far too often. In fact, you never really want to say hello all by itself on the telephone. It is fine for face-to-face greetings, but said on the phone, it means you don't know who is calling, or why they are calling in the first place. That makes no sense. Your digital butler should say hello, not you.
Furthermore, why call at all? Sure, it may be important for many purposes, often for emotive reasons. Yet consider the alternatives now available. Federal Express's Web site is a nice example. Until recently, I would call an 800 number to ask a human if the 10-digit domestic or 12-digit foreign waybill number could be traced, then I would hear typing in the background. Now, I click a few times on the company's Web site and am much more satisfied with the quick, direct reply. The Relais & Chateaux hotels have been on the Web for more than a year and a half, so I have stopped calling them.

Just think: all of these transactions and many more once required phone calls. In fact, this extends to people. If your circle of acquaintances is online, you call them much less. In my own case, I place fewer than five calls a day and receive as few. With my mother online, we call each other less but communicate almost daily.

Don't sell your phone stock yet


While I may not expect to pay anyone for moving my bits, I am prepared to pay handsomely for value added to them. By this I mean any of the following: filtering, prioritizing, sanitizing, authenticating, encrypting, storing, translating, or personalizing, to name a few. My colleagues will argue about where such value should be added: in the network or at the periphery? As an extreme decentralist, I will argue that as much as possible should be done at the periphery.

But then, I look at it this way: if I am on the periphery, the "center" looks exactly the same to me as others on the periphery. For example, I can perform operations locally or remotely. Once I go remote, there is no difference - the network, switches, servers, and others' personal computers all look the same. They are the "elsewhere." This is important because whoever has the pulse of the network may be in the best "elsewhere" position, as more and more gets pushed into the network for one reason or another. Today, so-called network computers are being advocated to lower the cost at the periphery. Video-on-demand is attractive just to be rid of the VCR and the clutter of videocassettes.

I would instantly push my alarm clock into the network, where it could have access to weather, airline delays, traffic reports, snow cancellations, and any other kind of information that could affect the time I should get up. Yes, I would pay the phone company some money to wake me up, and a lot more not to wake me up, when possible. That's added value.
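To make the alarm-clock example concrete, here is a small, entirely hypothetical sketch of such a network service: the wake-up time is just a default, which the service revises as it polls feeds for flight delays, traffic, and snow cancellations. The feed names and adjustment rules are invented for illustration.

    from datetime import datetime, timedelta

    # Hypothetical feeds the "network alarm clock" could consult.
    def check_feeds():
        return {
            "flight_delay": timedelta(hours=2),   # departure pushed back
            "snow_cancellation": False,
            "traffic_extra": timedelta(minutes=20),
        }

    def wake_time(default):
        info = check_feeds()
        if info["snow_cancellation"]:
            return None  # don't wake me at all - worth paying extra for
        # A later departure lets me sleep in; heavier traffic gets me up sooner.
        return default + info["flight_delay"] - info["traffic_extra"]

    alarm = wake_time(datetime(2005, 1, 17, 6, 30))
    print("Wake me at:", alarm)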

Mouse potatoes
I truly believe that during prime time in 2005, more Americans will be on the Net than will watch network television. NBC, CBS, ABC, Fox, and CNN could by then be doing more business on the Web than via broadcast. Under these conditions, a telephone company stands to profit handsomely. And it does not have to own content - a common belief just five years ago. CNN does not want to personalize the news. It has enough trouble gathering it from around the world - and you don't necessarily want to limit your input solely to theirs. One hundred million
news-reading and news-watching Americans will soon realize the possibilities that can be derived from looking at 100 million different editions of the news - something the phone company could make possible. In fact, content providers are not well suited to deliver tailored news, as they are perforce focused on their own.

I bet you would pay your phone company a few dollars a day for a news service, perhaps print in the morning and video in the evening, whose stories combined headline news and items of personal interest. In fact, this could be an ironic example of added value: I would pay my telephone company more to give me fewer bits, but the right bits. Wouldn't you?

Next Issue: Electronic Word of Mouth
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.09 September 1996.]


WIRED 4.10 - Electronic Word of Mouth

NEGROPONTE

Message: 40 Date: 10.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Electronic Word of Mouth

One fine day a young woman went out to buy a car. The dealer convinced her to purchase a Ford Taurus for US$19,500. She said she needed to sleep on it and would come back the next day. But instead of just sleeping on it, she used the Net to inquire whether there were others, near her, who were also considering buying a Taurus. By next morning, she had found 15 people who were. Some email discussion ensued, and she returned to the dealer to say she would take the car, but for $16,500. This was so far below his price that he assumed she had made a mistake. "No sir. I have not made a mistake," she replied. "I simply failed to mention that I am buying 16 cars, not one." Delighted with the idea of selling in such volume, the dealer promptly sold the cars at her price.

A buyers' cartel - as opposed to a sellers' - is almost impossible to create: too many people need to be involved. Meeting with, speaking to, calling, or finding those who may be interested is too difficult, and you probably wouldn't know who to contact anyway. Consumers of products find themselves in a poor position compared with suppliers, people who, by virtue of knowing one another, can fix prices anytime they agree. In addition, suppliers are typically few in number. The consumer's position is weakened because he or she cannot shop efficiently. The potential buyer cannot cover an area that is wide enough to be significant. This is about to change.

Let your fingers do the walking?


In the days before Pattie Maes was a mom, and prior to joining MIT's tenure track, she had plenty of time to browse through stores, newspapers and magazines, even cities, with the hope of discovering some piece of treasure. These days she hasn't the time to explore at a leisurely pace, but even worse, the amount of information and the number of products have expanded almost exponentially. What was merely overload yesterday has become impossible for her today. I must say I have felt this way for years! I don't even refer to the Yellow Pages. Instead, I trust friends, whose recommendations have proven to be on the mark. There is my sister-in-law for movies, Jerry Rubin for restaurants, and
Sherwin Goldman for wine. Like most of us, I have few people to whom I can turn. Otherwise, I rely on critics, experts who provide evaluations. In other words, today we have only two choices: ask a friend or trust an expert. Thanks to the work of Professor Maes and her colleagues, including former students who have started Firefly Network Inc. (formerly Agents Inc.), we now have a third way to find a new film, a hip restaurant, a timely news article, or a hot Web site. The concept is called collaborative filtering - a way to tap into other people's wisdom.

Have it your way


In July 1994, a program called RINGO became the first instantiation of this concept. It helped users find interesting music. Mind you, this is much more difficult than locating the lowest-priced Ford Taurus or the most inexpensive pair of Porsche sunglasses. Cost is easily measured, but "interesting" music is very much in the ears of the beholder.

What RINGO did was simple. It gave you 20-some music titles by name, then asked, one by one, whether you liked it, didn't like it, or didn't know it at all. That initialized the system with a small DNA of your likes and dislikes. Thereafter, when you asked for a recommendation, the program matched your DNA with that of all the others in the system. When RINGO found the best matches, it searched for music you had not heard, then recommended it. Your musical pleasure was almost certain. And, if none of the matches were successful, saying so would perfect your string of bits. Next time would be even better.

The idea is simple, but the existence of the Net makes it more than cunning. When graduate students first put RINGO online, thousands of users and music titles were on it within a week. The expansive and global nature of the Net takes word of mouth, as we know it in real time and space, and reaches across all time zones and the planet itself. Try it. The current, commercial incarnation of this idea can be found at www.ffly.com/.
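The matching step RINGO performed can be sketched in a few lines. Below, each user's ratings vector plays the role of the "small DNA": we score the overlap between the new user and everyone else, then recommend titles the best-matched neighbor liked but the new user hasn't rated. The ratings and titles are invented, and real systems weight neighbors and normalize scores far more carefully.

    # RINGO-style collaborative filtering, in miniature.
    # Ratings: +1 like, -1 dislike; unrated titles are simply absent.
    ratings = {
        "ann": {"Blue Train": 1, "Kind of Blue": 1, "Nevermind": -1},
        "bob": {"Blue Train": 1, "Kind of Blue": 1, "Giant Steps": 1},
        "cal": {"Nevermind": 1, "In Utero": 1, "Kind of Blue": -1},
    }

    def similarity(a, b):
        shared = set(a) & set(b)
        return sum(a[t] * b[t] for t in shared)  # agreement minus disagreement

    def recommend(user, k=1):
        me = ratings[user]
        neighbors = sorted(
            (u for u in ratings if u != user),
            key=lambda u: similarity(me, ratings[u]),
            reverse=True,
        )[:k]
        # Titles liked by my closest matches that I haven't heard yet.
        return {t for n in neighbors
                for t, score in ratings[n].items()
                if score > 0 and t not in me}

    print(recommend("ann"))  # -> {'Giant Steps'}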

Taking the noise out of the Net


Many users, old-timers and newbies alike, complain that the Net is too noisy. There is no easy way to separate the opinions of an 8-year-old from those of The Washington Post (not sure which I would prefer). Remember the old axiom "Garbage in, garbage out"? Well, it is just that old. It's about time people realized that the noise of the Net is its beauty, and that navigating this noise will become easier and easier because of electronic word of mouth. Until recently, the assumption has been that you must have brand recognition. Brand names, we are told, make information credible. If The New York Times says it, or if John Markoff says it, you'd better believe it. Otherwise, beware. Search engines don't discriminate between a high school term paper and Britannica's homepage, let alone the writings of Cliff Stoll and this back page.

Electronic word of mouth does. And it works both ways. It not only allows you to find music titles of obscure ensembles, for example, but it very quickly blackballs the bull. It means that one person's three-star restaurant can be anathema to another. We have seen just the beginning of a new kind of Consumer Reports - done by consumers, for consumers.

For Pattie Maes and company, the ultimate effect of this technology will be demonstrated when, for instance, a band signs with a big label because Firefly generated so much excitement about its music. That is, when a new product will be launched because word-of-mouth technology formed an online cartel of people who want it to be sold.

Pattie Maes (pattie@media.mit.edu), a professor at the MIT Media Lab and founding chair of Firefly Network Inc., contributed to this column.

Next Issue: The Digital Absence of Localism
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.10 October 1996.]


WIRED 4.11 - Being Local

NEGROPONTE

Message: 41 Date: 11.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Being Local

John Perry Barlow suggests that cyberspace secede and become a state of its own. Most people don't find this plausible. But think about it for a moment. Just about every conflict in cyberspace can be traced to a single phenomenon - the absence of locality. The Net's envelope is the whole planet. Some governments and their regulators talk about curtaining their nations from the Net, monitoring bitstreams, and banning offensive Web sites - all essentially impossible tasks.

Legal control is always local, and this is increasingly so. A country like Switzerland, itself very small, gives its 20 cantons (states) and six half-cantons enormous power. The federal government keeps a low profile, so much so that I defy you to name Switzerland's head of state. In many ways, the United States is similar to Switzerland. Visitors marvel at our liquor laws whereby, state by state and city by city, regulations change. While you may not be able to buy liquor in one town, you may in the next. Decency laws are similar in the range of views they reflect.

An important part of the current political debate concerns increasing the control at local levels because, we are told, people are more civic-minded when they believe they will be held accountable and when control lies close to their doorstep.

Everyone's your neighbor


When the US Congress passed the Telecommunications Act of 1996, it included an absurd Communications Decency Act that has since been struck down by a three-judge panel in Philadelphia and is awaiting consideration by the Supreme Court. This legislation makes transmitting digital material on abortion illegal and overlaps regulations already in existence. It is interesting to note that even in the world of atoms, the practice has been not to enforce these regulations.

My point does not concern a violation of the First Amendment or the impossibility of enforcing such a law, although I believe the act suggests both. It is that legislators made, in essence, a categorical mistake. Cyberspace is not geopolitical. Cyberspace is a topology, not a
topography. There are no physical constructs like "beside," "above," "to the north of." This is obvious. But it is not so obvious to the digitally homeless who govern most countries. The tragedy of the CDA is that countries less democratic than ours have already pointed to it and said, "You see, even the Americans think the Net is smut," failing to recognize that the CDA was instantly enjoined. Sovereignty is an odd and maybe useless concept within the digital world. But the real test of sovereignty is not decency. It is money.

Digital cash
Excuse my apparent digression to a treatment of money as yet another issue of bits and atoms. What follows is an incident that caused me to think about digital money in a new way.

Two years ago, I was skiing in Klosters, Switzerland. On this occasion, the first ski day of the season, I found that the paper lift ticket had been changed to a smartcard, which, snugly nestled in your pocket, is read as you approach a turnstile - certainly convenient for the mittened skier. Since these smartcards contained electronics, the ski-lift company wanted them back and required a SwF10 deposit (approximately US$8), which could be redeemed at any lift or railroad station.

I ended my first day near neither. Instead, I drove to the neighboring town to visit my father in the hospital. On the way, I stopped to buy some chocolates and, while paying for them, reached into my pocket and pulled out a handful of coins, including the smartcard. Without my reading glasses, I squinted at the coins and must have looked like a struggling tourist. The cashier reached over the counter to take the exact change. First she took the smartcard, saying that it was worth 10 francs, followed by the few additional coins she needed.

I was stunned. Then I noticed a pile of smartcards on the cash register behind her. "What do you do with these?" I asked. "We pay the baker," she answered. This was too much. I visited the baker, and he had far more of these ski-lift cards, which he said he used to pay for milk, flour, and delivery. Obviously, the lift company must be running out of cards. What does it do? It does what our government does. It prints more. I sure hope the cards cost less than 10 francs!

Is this significant? Yes, because nobody cares; that's what is interesting. Nobody cares that these lift cards have become local currency because they are just that - local. This currency moves slowly and is restricted to a small section of a remote valley in eastern Switzerland.

Now, turn those atoms into bits. Suddenly locale has no meaning. I have a global currency as long as it's attached to a trusted entity - akin to the lift company - and that entity need not be a country. Most of us would trust GM, IBM, or AT&T currency more readily than that of many developing nations because the "currency" represented by those companies is more likely to remain convertible. After all, a guarantee is only as good as the guarantor.
The ski-lift currency moved by virtue of being in my pocket at the right time. As soon as currency becomes bits (dutifully encrypted), its reach is unlimited. In fact, while organizations like the EU struggle to achieve a single currency, cyberspace may develop its own much faster.

A new localism
Neighborhoods, as we have known them, are places. In the digital world, neighborhoods cease to be places and become groups that evolve from shared interests like those found on mailing lists, in newsgroups, or in aliases organized by like-minded people. A family stretched far and wide can become a virtual neighborhood. Each of us will have many kinds of "being local." You can almost hum it. Being local will be determined by: what we think and say, when we work and play, where we earn and pay.

Next Issue: Laptop Envy
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.11 November 1996.]


WIRED 4.12 - Laptop Envy

NEGROPONTE

Message: 42 Date: 12.1.96 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Laptop Envy

Henry Ford would be amazed by today's automobile ads. He'd find no mention of horsepower or acceleration. Instead, he'd find references to seemingly trivial accessories - automatic door locks, dimming mirrors, built-in cup holders, and the like. But he would have little cause for alarm. Form over function is often the path of mature products. It isn't necessary to mention basic features like the engine. Instead, lesser details creep to the foreground and provide character and uniqueness.

Shortly, that shift will transform portable computing. My guess is that our children will never see a laptop characterized by the speed of its processor. Already we see that the form factor - the machine's physical shape - is more important than the speed of its microprocessor.

My first laptop was a Sony Typecorder, released in early 1980. This svelte machine weighed 3 pounds 4 ounces, ran endlessly on four AA batteries, and offered a full-sized keyboard and built-in tape drive. Its most noticeable drawback was the one-line LCD, but I quickly got used to it. The Typecorder was aimed at a market of journalists. And for this reason, the modem only uploaded - it literally had just one suction cup for attaching to a telephone mouthpiece. Pretty primitive, but, boy, did I get a lot of work done with it. I suddenly found use for the interstices of life, the little wasted times, where I might otherwise nod off, doodle, or daydream. In fact, I used my Typecorder to write almost every proposal to fund the Media Lab's construction.

In those days, though, a laptop often became an attractive nuisance. Working on an airplane was difficult, as people interrupted me to ask what the device was. It became easier to use pencil and paper during short flights until I perfected my body language to provide an easily readable Do Not Disturb sign.

In 1983, a young Japanese genius, Kay Nishi, designed the next-generation laptop, marketed simultaneously by Olivetti, Tandy, and NEC and first built by third-party Kyocera. These too were lightweight machines with full keyboards, powered by four AA batteries. But Kay's design
had an eight-line display. Though none of the models had the brushed-aluminum elegance of the Typecorder, they were several steps ahead and included support for a full duplex modem. I used my NEC PC801A for almost 10 years before switching to a PowerBook 180, which I still use today. But the evolution of laptops has gone somewhat downhill.

Common PINs
Now when I travel, almost everyone is pecking away at a keyboard. The one-line monochrome message has evolved into a full-color, 12-inch display. That is enormous progress, but at a powerful price. I now carry eight to ten battery packs during long trips. I won't even consider a laptop design that includes unstackable batteries. The fact that most batteries don't indicate their charge state is pathetic. It's as if the designer assumed that the laptop would always be used plugged in, and that people would travel with one spare battery at most.

While advising a large Japanese firm on its future laptops during the late 1980s, I discovered that Japanese designers viewed them as movable desktops. Small homes and offices made it necessary to put a machine away and take it out again. They were designing machines that would never see a lap and would fit perfectly into a culture that drew hard lines between home and office, work and play.

But portable computers are also for peripatetic, digital people. These are people who need more than a high-octane computer - they need a constant digital presence. Under these conditions, the value of some features suddenly changes. For example, lightness counts, but ruggedness counts more. I have abandoned PC card modems because their connector is too delicate; I prefer shoving the RJ-11 into the back.

Today, flight attendants don't ask me what's on my lap; they ask me if it has a CD-ROM - in which case the FAA says I can't use it inflight. I doubt laptops radiate a big enough electrical field to be hazardous, but I'm certainly not going to argue, even if this falls on the ridiculous side of the safety issue. And heaven forbid that laptops be prohibited altogether (as they were for a while on Korean Air). If that happens, there will be something new to envy and market: tempested laptops, the machines the intelligence community uses to avoid radiation leaks (so spooks and counterspooks cannot snoop from a distance).

Real envy
Laptop form factors have approached their limits. Face it - keyboard size is driven by the size of your hands: you don't want your machine to be less than 11 inches wide. The screen probably ought to be about 8 inches tall, hence the machine needs to be 8 inches deep. And, if the machine gets too thin, it will become structurally awkward, if not uncomfortable. In fact, you want a certain amount of weight so it won't slide around.

Even the display has limits. You really don't need more than 100 pixels per inch. Today, display brightness and contrast are more important than resolution - so there goes power again (until somebody invents a good reflective display).

But I do have one new requirement - something that planes and boats have and cars soon will. I want my laptop to know where it is. At a basic level, this means knowing about time and time zones. However, I mean something much more refined, including the ability to correlate longitude and latitude with cities, so that my laptop will know what town it's in, what language to use, what local telephone numbers to dial, and what protocols to use for Net access. Let it worry about changing dial tones or the need to use pulse versus touch tone.

Computer vendors: You have the form factor about right. Stop producing smart-looking, power-hungry machines, and move toward simple-to-use, smart-acting machines. A simple start is letting my laptop know where it's situated.
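A minimal sketch of the location awareness being asked for, assuming only a GPS-style latitude/longitude fix: map the reading to the nearest known city and pull local settings from a table. The city table, locale fields, and dialing details are invented for illustration.

    import math

    # Hypothetical locale table: city -> (lat, lon, settings).
    CITIES = {
        "Boston": (42.36, -71.06, {"language": "en", "dial_prefix": "1",
                                    "pulse_dial": False}),
        "Athens": (37.98, 23.73, {"language": "el", "dial_prefix": "30",
                                   "pulse_dial": True}),
    }

    def nearest_city(lat, lon):
        # A crude planar distance is fine at city granularity.
        def dist(c):
            clat, clon, _ = CITIES[c]
            return math.hypot(lat - clat,
                              (lon - clon) * math.cos(math.radians(lat)))
        return min(CITIES, key=dist)

    def local_settings(lat, lon):
        city = nearest_city(lat, lon)
        return city, CITIES[city][2]

    # The laptop wakes up, reads its position, and configures itself:
    print(local_settings(42.4, -71.1))  # -> ('Boston', {...})

Next Issue: The Future of Paper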
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.12 December 1996.]


WIRED 5.01 - Surfaces and Displays

NEGROPONTE

Message: 43 Date: 1.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Surfaces and Displays

Joe Jacobson, coauthor of this article, believes that paper is a medium for the future - a medium that will build on its current ubiquity, but in an exciting and revolutionary way.

How important are paper and ink in today's world? One in seven US patents makes mention of either paper or ink - more than make mention of any type of electronics! Hard to believe? Look around your office or home and count the number of items that have some form of print on them, then compare that with the number containing chips.

The phenomenal readability and economy of printed ink on paper compels us, even in the digital age, to mark our behavior in this age-old manner. There is no lag when going from page 1 to page 44 of a book and then back to the appendix. So, too, with a newspaper. The presentation is immediate. No start-up, no logon, no button click, just paper where and how you expect it. Ink is great because every page and object gets its own. You don't have to go to a special corner of your desk to see ink. It's everywhere.

Electronic ink
One disadvantage to ink is that it's tough to erase. What we need is electronic ink that can be printed as freely onto as many different surfaces as traditional ink, but that is electronically mutable. It should be able to get up and walk away and change its shape, color, or intensity. Joe's ink can do all this.

His secret takes a page from carbonless paper. The back of carbonless paper has a thin coating composed of tiny capsules filled with clear ink. These capsules, about 1 million per square inch, are then broken with the pressure of your pen. When the clear ink oozes out the back, it chemically changes a colored ink on the page underneath.

Now, put that thin coating on the front of the page, and instead of putting ink in those capsules, imagine stuffing them with ping-pong balls one one-thousandth of their normal size, black on one side and white on the other. Then add some lubricant. Assuming you can control the rotation of the contents of each capsule - independently, electronically, and with the knowledge of where it's facing - you have electronic and reusable paper.
Given that the flat-panel display market is US$30 billion per year and growing, Joe is not alone in his quest. Enormous energy and thought is being given worldwide to making better computer displays. The current standard is the thin-film transistor LCD. It draws 2.6 watts, costs about US$1,000, and is constructed on glass. TFT displays are expensive because their million or more transistors are spread over large screens. They consume generous amounts of power because the TFT backplane eats about a watt, as does the required backlight (transmissive LCDs let through less than 20 percent of the light). Because of the glass sandwich they are packed in, LCDs are not as rugged and cannot be used as flexibly as they should be. Technical improvements can still be made, and electronics companies around the world are investing billions of dollars in research and manufacturing facilities to do so.

So, how can Joe compete with these deep-pocketed giants? Simple: he looks at the problem differently. It's not a display he is building. It's ink. The advantage of his mind-set is that ink is more general than paper. It can go on almost anything, and it's cheap. To make a display, just add a grid of addressing lines - which, by the way, is just another type of ink (of the conductive variety) - to control the behavior of your e-ink.
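How a grid of conductive-ink address lines could steer the capsules can be pictured as plain matrix addressing: energize one row line and one column line, and the capsule at their crossing flips. The toy model below simulates that idea; it is not a description of the actual drive electronics.

    # Toy model of e-ink matrix addressing: a page of bichromal capsules,
    # each showing its black (1) or white (0) side.
    class EPage:
        def __init__(self, rows, cols):
            self.caps = [[0] * cols for _ in range(rows)]

        def drive(self, row, col, black):
            # Driving one row line and one column line addresses exactly
            # the capsule at their intersection.
            self.caps[row][col] = 1 if black else 0

        def show(self):
            return "\n".join("".join("#" if c else "." for c in r)
                             for r in self.caps)

    page = EPage(3, 5)
    for col in (1, 2, 3):        # "typeset" a crude dash
        page.drive(1, col, black=True)
    print(page.show())
    # Reuse the same page tomorrow: just drive new capsules.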

Paper comes alive


Once you've got working e-ink, there is nothing to stop you from binding several hundred e-pages to construct an e-book worthy of the name. Coming from the flat-panel LCD point of view, one would never envision an electronic book containing hundreds of displays. It would be much too heavy, too power hungry, and way too expensive, not to mention fragile. But e-ink gets you there. My grandchildren and Joe's children may carry around a single volume containing a whole library of books whose pages are used over and over again. No other book would be required.

But let's go one step further. When your printer is loaded with conductive e-inks, you need not stop at books. Everyone agrees that shipping newsprint is absurd. Yet few people read their news on a screen (I may be one of the few). In general, the screen is not in the right place - you are forced into a specific position and cannot always take the monitor with you. What screens do allow is easy change, be that video, personalization, or up-to-the-minute news.

Not a new concept, by the way. When Thomas Edison was 14, he set up his famous printing press in the baggage car of Port Huron's Grand Trunk Express. He received the daily news via telegraph, which he would then typeset and distribute as an up-to-the-hour newspaper on the train ride to work. The same thing can be done with e-ink.

Radio paper
It turns out that the conductive inks used to make e-paper can function as radio antennas.
Other inks used in e-paper can be turned into radio transistors. This makes "radio paper," which can be as thin as notepad stock and sit on a coffee table or in your pocket, receiving FM news broadcasts. It "typesets" itself - every hour or day - with the latest news. With e-ink, a single piece of paper displays the news for years.

By extension, any surface can now be modified into a display. Wallpaper of the future will be sold by the gallon in one customizable color, billboards will be painted once, wine labels will tell you when to drink the bottle, T-shirts will be watches, and our trees might live a little longer.

This article was coauthored with Joe Jacobson, assistant professor of media at the MIT Media Lab.

Next Issue: Pay Whom Per What When
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.01 January 1997.]


WIRED 5.02 - Pay Whom Per What When, Part I

NEGROPONTE

Message: 44 Date: 2.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Pay Whom Per What When, Part I

When you fly your dog across the Atlantic, you pay a fixed fee. By contrast, flying your dog between European cities is priced by its weight. Now, imagine that you couldn't buy an individual airline ticket for yourself or your dog. Instead, suppose airlines offered an annual pass that covered an unlimited number of flights. Or, imagine that you could buy individual tickets and the price was determined by your weight.

Americans can't seem to comprehend the British National Health Service, yet they pine for the simplicity of Steve Forbes's flat tax. We hold strong if seldom consistent views on how we should pay for things.

The digital world has created an opportunity to totally rethink billing methods. In turn, we will be forced to revisit the fundamental concepts supporting service cost and customer value. Most debates today on whether and how the Internet should be tariffed are mere reruns of debates that already have been played out in the world of atoms. Tomorrow's debates will be different. They will focus on issues unique to the world of bits. Unlike atoms, bits aren't consumed by consumers. They can regenerate - infinitely.

Subscribing to subscription
During the next few years, we will witness an explosion of payment methods. Yet, unlike the 40-odd calling plans offered by your cellular telephone provider today, payment plans will combine two principal ideas: flat fee and pay-for-use. Neither of these is superior to the other, but we'll see innovation in both. The mind-set of most netizens, both consumers and service providers, supports a flat fee - a fact the European telcos still don't understand. But the digerati are at fault as well for not recognizing the rush toward pay-for-use. Both can, and will, benefit the consumer.

From the consumer's point of view, there are two apparently contradictory arguments in support of flat fees: certainty and serendipity. People like to know in advance what something will cost, even if the flat fee results in a cost that may be higher than they would have paid "by the
meter." Also, people want to browse, window shop, or find some unexpected treasure without the sound of a meter ticking.

From the seller's point of view, flat rates offer even greater advantages. First, cost savings. Fifty percent of the price of a phone call covers billing - the cost of a call is cut in half right away by switching billing to a flat fee. Second, the relative certainty of income. The cost of a magazine subscription - typically much less than the price of a year's worth of newsstand issues - serves as an excellent example. The information provider is guaranteed a certain amount in sales and, further, gets paid in advance - both of which help cash flow. The more the provider has to invest in advance of sales, the better this appears.
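The certainty argument is easy to put in numbers. A throwaway sketch, with invented prices, comparing a flat monthly fee against metered use, including the claim above that billing overhead accounts for half the metered price:

    # Flat fee vs. metered pricing, with invented numbers.
    FLAT_FEE = 20.00          # dollars per month
    METERED_RATE = 0.10       # dollars per minute, half of it billing cost

    def monthly_cost(minutes, flat=True):
        return FLAT_FEE if flat else minutes * METERED_RATE

    breakeven = FLAT_FEE / METERED_RATE
    print(f"Metered is cheaper below {breakeven:.0f} minutes/month")

    # The seller's view: drop the meter and the billing half disappears,
    # so the same revenue supports twice the usage.
    for minutes in (100, 200, 400):
        print(minutes, monthly_cost(minutes, flat=False),
              monthly_cost(minutes, flat=True))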

Marginal marginal costs


The flat-fee concept is particularly desirable when marginal costs to the supplier remain, well, marginal. Europeans are astonished by the American practice of refilling the customer's coffee cup for free - try that on the Piazza San Marco or the Champs-Élysées. Yet the marginal cost of an extra cup of coffee is low, making the practice well worth the price, given the possibility of attracting new customers - even those who don't drink the extra coffee.

In the world of bits, marginal costs are often indistinguishable from no cost. Once a user consumes a few bits, why not let him or her have a few more for free? What if the marginal cost to a supplier, who offers increased use, was actually negative? The increasing incidence of advertising on the Net makes this a growing possibility, so much so that I predict within three years few people will pay their local ISP for Internet access, and potential high-value customers will be paid to surf.

Digital dumping
Japan consistently has been accused of dumping, whether it's supercomputers or semiconductors. The complaint results from the allegedly predatory practice of taking huge losses until the competition is obliterated, at which point a completely monopolistic position can be taken resulting in exorbitant fees. American trade associations cry foul. Congress is never far behind.

Yet Netscape emerged and gave its browser away for free. Now, with 70 percent of the world's market share, the company charges US$49 and up per copy. Not a peep from anyone. Is this because Americans whine about dumping only until we do it successfully? No. It's because there is an unspoken acknowledgment that the rules of trade have changed. Bits aren't sold the same way atoms are sold.

Netscape introduced an entire new class of payment. Instead of a one-off payment for a given capability (x) or a usage-independent subscription (x/t), Netscape pioneered the idea that what you pay for is effectively the rate of change in functionality (dx/dt, for the left-brained).

All forms of usage-independent pricing, however, have their downside. If you use something rarely, why pay a monthly overhead? As someone who drives little, I find the Swiss and Greek
systems of annual road fees far less attractive than the French and Italian toll systems where you pay as you go.

Alas, the cost-savings argument of a fixed fee is rapidly disappearing for suppliers. This is due to the falling price of computer cycles and the introduction of new forms of electronic payment, both of which help reduce the cost of transactions to virtually zero. Today it costs a dollar to process a check and 25 cents to handle a credit card transaction. When payment systems cost a penny, the case for fixed fees quickly erodes. However, the real driver for pay-for-use is more subtle: it is the opportunity to tie payments more closely to customer value, as discussed in the next issue.

This article results from conversations at CSC Index Vanguard meetings with Richard Pawson (rpawson@csc.com). Pawson, who coauthored much of the text, is director of research for the CSC Index Foundation.

Next Issue: Pay Whom Per What When, Part II
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.02 February 1997.]


WIRED 5.03 - Pay Whom Per What When, Part II

NEGROPONTE

Message: 45 Date: 3.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Pay Whom Per What When, Part II

The search for an efficient means to handle micropayments has opened all sorts of possibilities on the Net. One of my favorite examples stems from a game under development by Rocket Science Inc. It is a Dungeons & Dragons-style role-playing game, which is given away and run over the Net at nominal or no cost. How does Rocket Science make any money? Here's how.

You find yourself in a beautifully rendered medieval castle, face-to-face with a green, smoke-puffing, long-toothed dragon. You (actually your avatar) are dressed in a terry-cloth bathrobe, which is fine for stepping out of a hot shower but crummy for fighting dragons. Then you notice some nicely polished armor hanging on the dungeon wall. Guess what? You can rent it for five cents to fend off the monster.

Significantly, the vendor has linked the pricing closely to the value received, or at least perceived, by the customer. The developers of future adventure games might even stoop so low as to exorbitantly charge the person trapped in a corner for a spear to ward off a band of ogres.

Buying looking versus eyeballs


Another example involves the current debate over how online advertising should be priced. Advertising in conventional media is priced in proportion to the number of readers or viewers, whether or not the ad is actually seen. A few years ago, a controversial campaign by an association of print media attacked the effectiveness of television ads. It placed an ad that showed a partially clothed couple, otherwise engaged, on the sofa in front of the television. Pointing out that according to the Nielsen ratings this couple was supposedly watching TV, the ad posed the question: Who's really getting screwed?

With Web-based advertising it's possible to know how often and how long an ad appears on users' screens, and whether a user clicked on it for more information (now commonly called a click-through). Buyers - whether of advertising, videogames, or air travel - seem to assume that they will be
better off in an era of value-based pricing, but this is far from true. Some cable companies offered live coverage of the November Tyson-Holyfield fight at US$9.95 per round - the longer the fight, the more you pay. Eleven rounds later it didn't look like such a good deal (although the price was capped at $50). What shall we expect next time? Twenty cents per punch or $20 per half-pint of blood spilled?

My bit is bigger than your bit


Discussions of pricing mechanisms in the wired world have largely focused on increased connectivity (the death of the go-between) and increased processing power (lower transaction costs). The real significance of the Net, however, lies in a greater understanding of context, and context holds the key to customer value.

Airlines euphemistically call this yield management. Advanced pattern-recognition algorithms compare the current booking status of any flight against previous data and predict the expected value of every open seat on the plane, in some cases on an hourly basis. The system then decides whether to make a discount fare available or hold out for the full sticker price. What if the system could make use of more personal knowledge, gained from your frequent-flyer record or from public information? "I notice, Mr. Negroponte," says an atonal, HAL-like voice, "that you will have to take this flight in order to make the conference in Lisbon where you are the keynote speaker. Under the circumstances, we feel justified in charging double the regular fare ..."

Differential pricing is a common practice in the world of atoms. Seniors get discounts on London buses. Children get into Disneyland for less than adults. Student passes make European travel affordable. Yet this almost never happens when using online services, in part because the industry is young and in part because authentication is difficult - though not impossible. Knowing who the user is will afford new and attractive solutions for selling and protecting intellectual property. For example, a child doing homework will be able to use this back page free, while an adult consulting it for a business plan will need to pay. Workable? Actually, it is.
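At its core, the yield-management decision is an expected-value comparison. A toy sketch, with made-up fares and probabilities: sell a seat now at the discount fare, or hold it for the chance a full-fare buyer shows up before departure.

    # Toy yield management: offer the discount only when the expected
    # value of holding the seat is lower. All numbers are invented.
    def expected_hold_value(full_fare, p_full_fare_buyer):
        return full_fare * p_full_fare_buyer

    def offer_discount(full_fare, discount_fare, p_full_fare_buyer):
        return discount_fare > expected_hold_value(full_fare, p_full_fare_buyer)

    # Months out, full-fare demand is unlikely: sell the cheap seat.
    print(offer_discount(800, 320, p_full_fare_buyer=0.3))   # True
    # Days out, a business traveler will probably turn up: hold out.
    print(offer_discount(800, 320, p_full_fare_buyer=0.6))   # False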

Wall Street, move over


The thing about value-based pricing is that it cuts both ways. Three months ago I told the story of a person who used the Internet to find other potential auto buyers. They pooled their collective buying power to negotiate a better price with the dealer. An even less complex arena will evolve from the emergence of a wide range of bid-and-offer marketplaces, simply cloned from the financial and commodity markets. Six years ago, an outfit in California attempted to launch a true bid-and-offer electronic market for airline tickets. It was a little ahead of its time, but with the benefit of pervasive access via the Web, such an idea could triumph today. In Singapore, electronic bid markets are used to buy
the license to own a car (which costs more than the car itself) - an effective if less-than-egalitarian approach to regulating traffic. The government of Western Australia has long employed this approach to source everything from telephones to toilet paper.

Where I would like to see the technology applied is in plain old telephone service. "Mr. Negroponte, this is AT&T's international, line-load balancing system. Our loadings are light tonight, so we can offer you an hour's conversation with your son in Italy for just $5. Press 1 to place the call."

"Hello AT&T, this is Nicholas Negroponte. I'd like an hour's videoconference at 128 Kbps with my mother in London within the next 48 hours. Any time of day is OK. I'm offering $10. Call me back when you're ready to place the call."

This article results from conversations at CSC Index Vanguard meetings with Richard Pawson (rpawson@csc.com). Pawson, who coauthored much of the text, is director of research for the CSC Index Foundation.

Next Issue: Dear PTT
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.03 March 1997.]


WIRED 5.04 - Dear PTT

NEGROPONTE

Message: 46 Date: 4.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Dear PTT

Most Americans don't realize that fewer than 10 countries in the world have private and competitive telephone systems. The rest have government-owned monopolies. These are usually part of a post and telecommunications ministry, commonly known as Posts, Telephones, and Telegraphs, and are run by civil servants whose government employer is both a telecommunications regulator and a service provider - clearly positions of conflict. The PTTs provide evidence, once again, that sovereignty and enterprise make strange bedfellows.

Because of government ownership, PTTs can get away with poor service and unilateral decision-making, while realizing handsome profits from sinful pricing. A recent six-minute local call from a pay phone in Switzerland cost me CHf17 (US$12).

During the summer of 1993, the Greek government fell to the socialists, largely because the New Democrat Party threatened to privatize the phone company. This would have resulted in downsizing in an economy where around 5 percent of the population are public employees. At the time, I was told that incomplete telephone calls make up 25 percent of the Greek phone company's income. Of course, there is no way for me to check this figure. But I wouldn't be surprised.

Two years earlier, I actually received a monthly $2,000 phone bill for a period of time I was not in Greece and my phone was disconnected. "Too bad," they said. "If you don't pay the bill, we will cut off your phone service - don't bother contacting any of our subsidiaries, such as the police, the judicial system, or the better business bureau."

Sell-off boon - but for whom?


Such shenanigans will come to a halt as more phone companies are privatized and more telecommunications environments are opened to competition. This is elementary.

But in the process of moving ownership to the private sector and enhancing competitiveness, what will be done with the money these countries realize from the sale of their telephone companies? Western European privatizations in general were worth $43 billion in 1996, which included Deutsche Telekom's $13 billion IPO, Europe's largest ever. No government can ignore this opportunity to reallocate social assets. Future consumers will be better served by liberalization,
and current politicians (or potentates) will be remembered for filling their nation's coffers. But therein lies the rub - should this money be used to patch up the general indebtedness created by politicians, or should it be invested in the people who produced it in the first place?

The Turkish government boldly states that it will use the money from the sale of its PTT to cover national debt and help cope with an inflation rate greater than 100 percent per year. This seems very wrong to me. Turkish citizens have been paying high prices for poor telephone service for years. The value of a national PTT is due, in large measure, to the citizens who have been good and faithful clients. As shareholders in government and stakeholders in telecommunications, these citizens deserve better spending plans when their government receives such a large windfall.

So, here is my suggestion in the form of a short open letter:

Dear PTT,

My sincere congratulations for your plans to privatize your phone company. But what will you do with the money? Let me offer a suggestion: connect your elementary schools to the Internet and provide as many personal computers as you possibly can. If you put as little as 10 percent of this nonrecurring revenue into wiring your schools, you would be investing in your future.

Your children don't have access to enough books. Tomorrow they could have access to the world's libraries. Unlike us, they could grow up with a global perspective, seeing and learning from many different points of view. What stands between kids and education is resolve - yours. It once was money, too. But you and your government are just about to get a basketful of that. Your biggest natural resource is the human capital of your children. Surely they deserve just a fraction of the proceeds from this historic event.

Reality check
From a macroeconomic perspective, one can argue that government money has no color - whether it comes from taxes or from the sale of public companies, it is all the same. From the taxpayers' point of view, however, there is a sense that government should honor its word, and a belief that, for example, a road tax should be applied to roads. Perhaps we can establish the same sense of accountability for a wired society.

Only $6 billion a year is needed to meet the worldwide need for basic, primary education, which currently reaches only 80 percent of children. Unicef strives to kindle a sense of absurdity by juxtaposing this modest $6 billion against the $40 billion per year spent on golf and the $85 billion a year spent on wine. But it's difficult to tax the golf and wine consumed in country A to cover education costs in country B. My suggestion is largely an expedient, connecting cause and effect, using a one-time windfall as a one-time start-up cost, because it is likely that all countries will privatize their telecommunications within the next 10 years. In the United States, we estimate that $10 billion to $20 billion is needed as a one-time charge to connect all K-12 schools. Vice President Gore is doing a good job of raising the sensitivity of the nation's citizens while providing incentives for companies to step in.

Other nations don't fare as well because they are less digital - in terms of both their citizens and their leaders. Why are they less digital? Partly because of the PTTs. Germany is a good example: the old Deutsche Telekom made being online prohibitively expensive. So, dear PTT, even if my argument does not stand up on logic, I hope you'll do the right thing anyway - out of a sense of shame and guilt.
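As a back-of-envelope check (my arithmetic, using only figures already quoted in this column):

$$0.10 \times \$43\ \text{billion} = \$4.3\ \text{billion}$$

That is, 10 percent of a single year of Western European sell-off proceeds alone comes within reach of Unicef's $6 billion-a-year figure for worldwide basic education.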
Next Issue: Tangible Bits

Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.04, April 1997.


WIRED 5.05 - Tangible Bits

NEGROPONTE

Message: 47 Date: 5.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Tangible Bits

At the age of 2, Hiroshi Ishii experienced his first PDA - an abacus. Even though this calculator was used primarily to manage Mrs. Ishii's budget, Hiroshi found many more interesting "dual uses." In fact, he used the abacus as a musical instrument, a toy train, and a back-scratcher. His mother didn't mind, and Hiroshi soon learned the "music" of addition and multiplication at the simplest level: the tune meant that the beads - predigital bits - were in use.

Forty years later, Ishii is determined to carry the idea of tangible bits forward and into the problem of making the human-computer interface seamless with the physical world. This mission obviously includes sound, dual uses, and - most important - the engagement of muscles and motor skills. Although somewhat less apparent, it includes attending to the peripheral senses - call these the ambience.

Feely-touchy
Transforming human-computer interaction from abstract mousings and keystrokes into hands-on engagement is the challenge Ishii and his students are addressing. They are building interfaces in which bits are embodied and grasped as physical objects and surfaces. Physical icons, or "phicons," are small objects that serve as both handles for, and containers of, information. A prototype of such an interface exists in the form of a horizontal display surface that senses the physical objects placed on it. This interface allows computer-generated video, graphics, and 3-D models to be accessed, for example, by placing a phicon on the display surface. You interact with the content by manipulating the phicons, inspecting the space with "lenses," or probing the space with "instruments."
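The interaction loop behind such a surface can be surprisingly small. Here is a minimal sketch in Python - the API, tags, and content catalog are invented for illustration, not a reproduction of the Media Lab prototype:

```python
# Minimal sketch of a phicon-driven tabletop (tags and catalog invented
# for illustration; not the Tangible Media Group's actual system).
from dataclasses import dataclass

@dataclass
class Phicon:
    tag: str      # identity of the sensed physical object
    x: float      # its position on the table surface
    y: float

# Each phicon "contains" content: the tag selects what to show.
CONTENT = {
    "campus-model": "3-D model of the MIT campus",
    "weather-map": "live satellite weather loop",
}

def render(sensed):
    """Show each phicon's content at the spot where the object sits."""
    for p in sensed:
        what = CONTENT.get(p.tag, "unknown object")
        print(f"({p.x:5.1f}, {p.y:5.1f}) -> {what}")

# Simulated sensing frames: set one object down, then slide it and add another.
render([Phicon("campus-model", 10.0, 20.0)])
render([Phicon("campus-model", 14.5, 20.0), Phicon("weather-map", 40.0, 5.0)])
```

The point is that the object's identity selects the content and its position places it - no menus, no double-clicks.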

World as interface
Yet there is a world of sensation beyond that which can be grasped with our hands or stared at with our eyes. Ishii dreams of mapping Earth's nuances of warm and cold fronts, trade winds, and tidal waves into the circulating currents of his hot tub. Why? Because, he explains, experiencing a hurricane in the Bahamas as a whirlpool around your ankle or a monsoon in Asia as a warm spot on your shoulder blade allows your skin to become the interface between the meteorological world and you.

Caught within the "painted bits" of glowing pixels, Ishii returns to his childhood abacus for a vision of future interface design, intent on using everyday physical objects and surfaces - a world full of incense bottles, writing desks, and window glass. As humans, we have myriad skills for processing information through tactile interaction with physical objects.

The idea of tangible bits includes peripheral senses. Note that you often close your eyes when "feeling" something or while trying to determine the source of a particular sound. This process of concentration extends to ambience itself. You know something without "looking" at it. But while computing, all we normally touch are transducers, and what we see is always "in our face." Yet our peripheral senses and the surrounding activity are equally important. Why has the use of background displays been lost in computer interaction? Could this stem from a cultural divide that Ishii's Eastern perspective reveals?

Making real and virtual seamless


Once the virtual world and the real world interpenetrate, the interface disappears. The Macintosh is easy to use because knowing one action set - see-point-click - allows users to navigate the menus. Ishii's interface is even simpler. His idea of interface is seamlessly coupled with the physical world. You manipulate things using your innate knowledge of the physical world. If you can pick up a mothball, you can run Ishii's computer.

His "computer" is a small room that is augmented with computer-controlled lights, shadows, sounds, airflow, and water movements. These communicate information to the user's peripheral attention - at the background of awareness - leaving the user free to concentrate on other tasks in the foreground, those which we refer to as "at hand." In this space, light reflecting off rippling water moves gently across the ceiling to communicate the activity of a loved one (e.g., the lab's pet hamster skittering on its wheel). Changes in lighting and an audio space that features birdcalls and thunder convey email or information on Net traffic. Past activity can be reviewed by turning back the hands of a physical wall clock.
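The mapping itself can be almost embarrassingly simple. A toy sketch, with invented device names and scalings, of how foreground data might drive the room's background cues:

```python
# Toy mapping from foreground data to background cues (device names
# and scalings invented; not the actual room controller).

def ambient_cues(unread_mail, net_traffic_mbps, wheel_rpm):
    """Translate data streams into peripheral, background signals."""
    return {
        "birdcalls_per_minute": min(60, 2 * unread_mail),   # email volume
        "thunder_volume": min(1.0, net_traffic_mbps / 10),  # Net traffic
        "ripple_speed": wheel_rpm / 100,                    # hamster activity
    }

print(ambient_cues(unread_mail=12, net_traffic_mbps=3.5, wheel_rpm=80))
# -> {'birdcalls_per_minute': 24, 'thunder_volume': 0.35, 'ripple_speed': 0.8}
```

The design choice is that none of these outputs demands attention; each is tuned to be noticed only peripherally, like weather through a window.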

Sicilian kitchen
The curtain rises; it is 2020. Quantum computing made the metric of qips (quadrillions of instructions per second) obsolete years ago, and computer interfaces have roughly the age and maturity that the automobile interface had in 1978. To view the true state of the art, we visit a Sicilian kitchen and look to the center table, only to find bread. Pasta. Olive oil and an overripe tomato. Perhaps the bread knives are edged with guaranteed-never-to-go-dull nanoceramic and the oven is fusion fired. The only glass screens in the kitchen are found on a window overlooking the garden and the oven door (both nanocleaning, of course). The only keyboard resides on the faux-vintage typewriter. And all the mice play tag with the cats.

This Sicilian kitchen is digital, of course, but it is also intimate and inescapably physical. While a few frantic folk and workers of the midnight hour consume energy pills, the Sicilians take pleasure in their food and embrace its substance and its preparation. Ishii and I join the Sicilians in this quest to maintain the primacy of the physical world as interface - and strive to make the recipe books, green peppers, and wine bottles of the future proud.

This article was written with Professor Hiroshi Ishii (ishii@media.mit.edu), who founded and directs the Tangible Media Group at the MIT Media Lab, and his graduate student assistant, Brygg Ullmer (brygg@media.mit.edu).

Next Issue: 2B1
Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.05, May 1997.


WIRED 5.06 - 2B1

NEGROPONTE

Message: 48 Date: 6.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

2B1

There is a new force in the world: the growth of cyberspace. Inherent in this force is a breakdown of barriers. Everyone talks about crossing barriers of geography, gender, and culture. But the most important barrier is perhaps the least appreciated: the barrier of age. Empowering kids is a double whammy, because they're the ones who will most effectively break down the other barriers as well.

The children of the world are critical to achieving a united world. Those of us who grew up in multiracial societies are likely to be less racially prejudiced than our parents. I see the same difference in people younger than me, who grew up in a more gender-enlightened era; many just cannot understand how much of an issue gender was in my time. I bet the kids of tomorrow will have the same feeling about nationalistic thinking. In fact, we are looking at a generation that will feel about culture the way most of us today feel about race and gender - identity and unity, being individual and plural at the same time.

What's wrong with this picture is that more than 50 percent of the 1.2 billion children ages 6 to 11 have never even placed a phone call. Yet the suggestion to give the kids of the world access to technology raises an obvious question: What sense is there in providing computers and Internet access to children in nations where there is inadequate food, clothing, and medicine? The short answer: lots.

Déjà vu
In 1981, French president François Mitterrand gave author Jean-Jacques Servan-Schreiber the mandate to establish a World Center for Computation and Human Development. The idea was based on Servan-Schreiber's book The World Challenge. Simply stated, developing nations should and could leapfrog the industrialization process and jump into the trade of bits, instead of atoms.

What gave this idea substance and credibility was the work of Seymour Papert, who had just published Mindstorms. Papert's theme of "teaching children thinking" was a natural complement to The World Challenge. And, with the initial backing of the then-wealthy OPEC, these crazy ideas started to make sense. Saudi leader Ahmed Zaki Yamani delivered a powerful address on human development that fall in Vienna. Paraphrased, he said: don't give a poor man a fish, give him a fishing rod. The leap from a fishing rod to a personal computer was, for some of us, easy.

The center's work focused on the use of computers for primary education in developing nations. The first site was a school outside Dakar, Senegal. This small experiment was just terrific; the kids had the most fun teaching the principal. Kids from the jungle learned faster than kids from the city. The second location was Colombia; the project had the full personal commitment of President Belisario Betancur Cuartas. For a short period, this outrageously bold idea looked like it was going to be the beginning of something very big and important.

It was not. Within months, the original mission was pushed aside in favor of addressing more immediate needs in France, where, after all, the center was based. In less than six months, the "world challenge" was replaced with "France's need" - installing a national fiber-optic system.

Timing
The 1981 Paris initiative was way ahead of its time. Even if it had not unraveled for other reasons, it would have failed because of the absence of global telecommunications and the rarity of personal computers. The IBM PC had not even been introduced in Europe.

Today, the timing is right. Two major forces fuel this timeliness: worldwide awareness and use of the Internet, and the spread of personal computers into the lives of children - at school and at home. Because of these forces, a group of us has created a nonprofit organization called 2B1, whose purpose is to bring the digital world to kids in those places least likely to provide access to it. The idea is not to go country by country, but to target the world as a whole. Sounds cuckoo, but it isn't, because the Net itself and the children using it now are very much part of the solution.

In parallel, the MIT Media Lab is also focusing on children, learning, and human development. The scientific and technical questions it faces range from language translation to storytelling to cultural understanding to the roles of nonverbal language.

Developing digerati
On July 17, MIT and 2B1 are cohosting a five-day workshop that will bring together people who have taken bold initiatives in bringing computers to children who live in technologically isolated places. For example, teachers who have defied the logic that you need to provide more chalk before you bring a computer into a primary classroom. Or social activists who have brought computers to street children who don't have schools at all. But especially those who have found even more imaginative ways to bring children into cyberspace. Check out www.2b1.org/.

We will pay travel, room, and board expenses for as many people as we can afford, with a strong priority given to getting at least one or two individuals from every developing nation. Do you know somebody who should attend? Our goals for the meeting include developing a 2B1 plan of action, collaborating with existing groups, and establishing a major granting program of hardware, telecommunications systems, and know-how.

Feels big? You bet it does. But just like the distributed Internet, this too can grow. In fact, the Net is the encouraging force. It is both global and popular - and it is what we did not have in 1981.

2B1 is a nonprofit foundation whose president is Peter Cawley (peter@2b1.org), whose vice chair and chief scientist is Seymour Papert (seymour@2b1.org), and whose director of product development and interface design is Dimitri Negroponte (dimitri@2b1.org). Other participants include myself, Saj Nicole Joni, Tom Grant, Rodrigo Arboleda Halaby, and others mentioned at the Web site.

Next Issue: Digital Obesity
Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.06, June 1997.


WIRED 5.07 - Digital Obesity

NEGROPONTE

Message: 49 Date: 7.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Digital Obesity

Recently, I've been forced to look for new hardware and software and have since been suffering the indignity of updating myself. I cannot believe that manufacturers have gone so wildly astray while I wasn't looking - complexity is out of control.

I have spent much of my time in front of a keyboard and display over the past 30 years. People have joked about my dependence on email since 1970, and older flight attendants remember seeing me use a laptop as far back as 1979. In fact, I don't know anyone more wired than me in his or her daily life. This is my way of saying I'm no piker.

But computers can be like ski boots. Old-timers are prone to keep their well-worn and comfortable equipment. Upgrading to the newest boot styles each year would raise hell with one's feet. Likewise, I am old-fashioned in my digital ways. I don't even use an email program but ride bareback on Unix instead. But inevitably, there comes a time when those favorite, laced leather boots need to be exchanged for a new pair. That time arrived in early 1997, and the new, modern digital headaches I discovered still haven't subsided.

Mind you, I'm lucky. I have the full and generous support of some of MIT's finest technical staff at my disposal. I wonder who the rest of society turns to.

Overweight software
The problem displays itself as featuritis and bloated software systems. I am fond of quipping about how every time Andy makes a faster processor, Bill uses more of it. Turns out it's not so funny. Have you looked at the size and complexity of Microsoft Word recently? Outrageous. And each successive version has gotten worse. It's to the point where most programs are almost unusable and run slower than what I used a decade ago. What is wrong with you Redmond folk? Maybe you'll learn something about ease of use from your recent purchase: WebTV.

My adult and professional life has been spent trying to make computers easier to use, starting as far back as 1965. In those early days, people thought only sissies needed graphics. In 1972, when we devoted 256K to storing images, most people wrote it off as just another indecency and MIT arrogance. Why would anybody in their right mind commit so much memory to the icing, not the cake? Three decades later, we find a generation of kids who count memory not in Ks, but in Ms (and soon Gs). This is actually quite wonderful, but look at what we are using it for. The interface hasn't fundamentally changed since the introduction of the Macintosh more than a decade ago. It's just harder to use and obscenely obese. Someone needs a wake-up call.

As a longtime devotee of Apple computers with a dozen active Macs currently in my life, I find myself extremely frustrated with the latest models. The little computer that greets you with a smiling face on start-up has become so complex that a Mac is now no simpler to use than a Wintel machine. So, like many, I decided it was time to switch platforms. I made my first foray into Windows two months ago and was so appalled that I raced back to the Macintosh like someone returning to a smelly bus after trying the newer subway system. I am amazed that so many people use Windows 95 without complaint. I guess there is a grin-and-bear-it attitude because THERE IS NOTHING ELSE. Yes, I am yelling.

Not PC or NC, but SC


People constantly ask what I think of the network computer. One result of that questioning is the appearance of headlines like "Negroponte calls Ellison a nutcake." Of course, the reporter forgot to quote what followed: "in the best sense of the word." Anything that makes our digital lives simpler is welcome. Larry Ellison gets in his own way with vituperative rhetoric about how the NC will obsolesce the PC and how Microsoft's evil empire will thus be crushed. Bill, for his part, had dismissed the NC with equal bravado until recently, when he jumped on the bandwagon with the Windows Terminal.

The sad fact is, NC or PC, they are both wrong - dead wrong. But you and I are going to do the dying for a while. We suddenly have no choice. The world does not want a PC or an NC, but an SC - a scalable computer. In short, this is a modular machine that can be as simple as pie (and not cost much more), yet able to grow from low-cost box to high-end supercomputer.

Personally, I am most interested in the low end of this scale. Why? Because there is no room for Windows 95 in Africa. Many other parts of the world also need affordable computing. I always thought this was a different problem from the one plaguing me. But suddenly I realize that even with so much of MIT's computing talent at my disposal and no care whatsoever for what things cost, I am no better off than peasants in Pakistan confronted with their very first computer. Today's machines are just too complex to be accessible.

But what is there to do about it - other than bitch? Is it time for a strike or a users' cartel? You bet it is. Whoever is guiding those young folks making the operating system and applications of tomorrow should put his or her foot down. It is time to lose weight. Stop making software that options you to death and start delivering simple, easy-to-use apps. The stuff you write is written by geeks, for geeks; why not try writing something for the rest of the world?

An interim solution or holding pattern might be to eschew those beastly apps and steer beginners to the Internet - through an online system like AT&T WorldNet. But when I went to install it myself, the instructions' first words, printed right on the CD-ROM, were: "Turn off the virus-protection software using the extensions manager." What the hell does that mean to Mom and Dad? Then, perhaps out of spite, the installer crashed my system.

Next Issue: RF Helps Marriage
Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.07, July 1997.


WIRED 5.08 - Wireless Revisited

NEGROPONTE

Message: 50 Date: 8.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Wireless Revisited

In the early days of cellular telephones, service providers touted "anything, anywhere, anytime," which I thought was real stupid. I suggested that a better jingle would be "nothing, nowhere, never - unless," where the "unless" clause was the added value of the wireless transmission, the real service being offered (see "Prime Time Is My Time," Wired 2.08, page 134).

Since those days, wireless service has grown in all sorts of places, for all sorts of reasons. For example, wireless phone systems are widely installed in developing nations because of their rapid deployment and low cost, even if there is no real need for mobility. Or look at some developed nations, where people carry cell phones purely for reasons of security - hardly the original purpose.

Like everybody else, I have a cell phone but never leave it on. In large part, this is because I don't want to be disturbed. It is also because I don't "do phones." I find email far more effective, and, as a result, I use telephony mostly for data, mostly from fixed landlines. Therefore, my day-to-day experience with radio frequency (RF) has been somewhat limited.

Socially acceptable bits


Recently, this changed. To my surprise, the change was both profound and obvious, and it had nothing to do with cell phones. Here is what happened: I installed a wireless LAN in my home. Mind you, I was one of the last to do this at the Media Lab. The system provides a 2-Mbps connection throughout my house - and my neighbor's (since I live in the city). I'm using Digital's RoamAbout system, based on Lucent's WaveLAN technology, which uses spread-spectrum transmission techniques. I am told it interferes only modestly - more or less imperceptibly - with household cordless phones (but since I don't have one, I'm not sure). My neighbors have not complained.

Operating a wireless LAN in your home has a stunning effect, especially in conjunction with a thin, lightweight laptop as elegantly designed as the IBM ThinkPad 560. (Apple, take note.) The result is a new kind of socialization.

In the past, I would excuse myself from the dinner table, from watching TV on the couch, or from lazing around the house to go off and work at a keyboard. Being online meant not being a part of the household. But no one complains when you pick up a newspaper, magazine, or book while others are watching TV. Right? Now, I can do the same with the Net and the Web and be no more antisocial than if I were reading a magazine. Think about it. Sounds trivial, but it sure nullifies the complaint my wife has had for more than 20 years: she says that my back is all she usually sees. Not anymore. This got me thinking: Was the Negroponte Switch correct after all?

Gilder can make you famous


George Gilder and I have shared the podium frequently, and I have learned a lot from him. One of our first encounters occurred about 10 years ago at an executive retreat organized by Northern Telecom (now called Nortel). At this meeting, I showed a slide that depicted wired and wireless information trading places. This idea had been prompted, in part, by some early HDTV discussions, during which I and others questioned whether broadcast TV should get any spectrum at all, since stationary TV sets could be better served by wires (read: fiber). In contrast, the theory continued, anything that moves needs to be wireless. Phones, largely wired at the time, would go wireless, and TV, largely wireless, would get wired. Gilder called this "the Negroponte Switch," even though Jim McGroddy at IBM or someone at the Media Lab may have suggested it first.

A decade later, it seems that this whole switching of places has been contradicted left and right. Satellite TV is doing fine. HDTV just got new spectrum. And the cable business is starting to include telephony. So how should one look at RF today?

Granularity
Many cell-phone users, believe it or not, think they are using a walkie-talkie-style communications system that is completely wireless - from one handset to another. In truth, most often there is a lot of wire in between; typically, the wireless portion is only a fraction of the distance covered. For this reason, instead of the simplicity of the Negroponte Switch, think of the more complex public/private nature of the bits: bits will travel wirelessly in proportion to the degree to which they're public. The bits that represent the Super Bowl, for example, are well suited to delivery by satellite TV - there really is no better way to get the same bits to 150 million Americans simultaneously. My phone or computer, however, merits less wireless distance. In the case of my newfound marriage assistant, the spread-spectrum LAN, it need reach only across my home. In the case of my TV remote control, it need reach only across the room.
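The proportionality rule can be caricatured in a few lines of Python - the audience thresholds and media here are invented, but they follow the Super Bowl-to-remote-control gradient just described:

```python
# Caricature of "bits travel wirelessly in proportion to how public
# they are" - thresholds and media invented for illustration.

def wireless_grain(audience):
    """Match the reach of the radio link to the publicness of the bits."""
    if audience >= 10_000_000:
        return "satellite broadcast (continent-wide RF)"
    if audience >= 2:
        return "home spread-spectrum LAN (house-wide RF)"
    return "remote control (room-wide RF)"

for audience in (150_000_000, 4, 1):
    print(f"{audience:>11} -> {wireless_grain(audience)}")
```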

What this suggests is that wireless communication should be designed with the nature of the bits in mind. The issue is not wired versus wireless but the strength of the signal. It also means that you had better not sell short the landline phone company or the makers of fiber-optic cable. In the end, we have to remember that nature has provided us with only one radio spectrum, no matter how cleverly we choose to use it. In contrast, insofar as a single fiber is more or less equal to the whole RF spectrum, the bandwidth of fiber landlines is effectively infinite, since we can keep making more and more of it, running the factories three shifts a day, seven days a week. For this reason, the granularity of RF will get smaller and smaller, for more and more personal bits. A good example of small-grain RF is the scale and extent of a home wireless LAN. You'll like the freedom it affords, and it might even help your marriage.

Next Issue: Redisintermediation
Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.08, August 1997.


WIRED 5.09 - Reintermediated

NEGROPONTE

Message: 51 Date: 9.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

Reintermediated

The new story of disintermediation is an old bits-and-atoms classic. The complex process of moving "things" has created a food chain of middlemen and wholesalers who import, export, warehouse, and redistribute physical items. For this reason, when you buy tomatoes for US$1.57 per pound, the grower gets less than 35 cents, while the rest goes to all the people in the middle (in the case of tomatoes, up to seven intermediaries may be involved). If you could buy direct, it would be a no-brainer to split the difference with the farmer, which would no doubt please the both of you. In fact, this is how online retailing started.

Boutique winemakers north of San Francisco could not attract the attention of large wholesalers, nor were they satisfied with limited local distribution. Enter the cork dork. Brothers-in-law Robert Olson and Peter Granoff, who refer to themselves as "propellerhead" and "cork dork," created Virtual Vineyards (www.virtualvin.com/), one of the first Web sites to retail anything, let alone wine. In theory, they run a no-inventory business by arranging to drop-ship wine directly to your home, while collecting a nominal fee for arranging the sale and handling the billing.

But wait a second. Why do I even need them? Why couldn't each vineyard run its own Web page and just agree on simple terms (full body, tannic, fruity, et cetera) and conditions (blend of grapes, use of oak, price per bottle, et cetera), so that a computer program could do the work of Virtual Vineyards, thereby cutting it out as well? Well, winegrowers could. And someday they will, albeit none too soon.
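To see how little such a program would need, here is a minimal sketch in Python of a direct vineyard-to-buyer matcher, assuming vineyards publish their wines in an agreed vocabulary of terms and conditions (the catalog, vineyard names, and attribute names are invented):

```python
# Sketch: if every vineyard published its wines in a shared vocabulary,
# a small program could match buyers to bottles directly.

CATALOG = [
    {"vineyard": "Hilltop Cellars", "body": "full", "notes": {"tannic"},
     "grapes": "zinfandel", "oak": True, "price": 14.00},
    {"vineyard": "Creekside", "body": "light", "notes": {"fruity"},
     "grapes": "pinot noir", "oak": False, "price": 11.50},
]

def match(catalog, body=None, note=None, max_price=None):
    """Return the wines meeting the buyer's terms and conditions."""
    hits = []
    for wine in catalog:
        if body and wine["body"] != body:
            continue
        if note and note not in wine["notes"]:
            continue
        if max_price is not None and wine["price"] > max_price:
            continue
        hits.append(wine)
    return hits

for wine in match(CATALOG, body="full", note="tannic", max_price=15):
    print(wine["vineyard"], "-", wine["grapes"], "-", wine["price"])
```

The hard part, of course, is not the program; it is getting the vineyards to agree on the vocabulary.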

Death of a car salesman


The experience of buying an automobile is so unpleasant that experts uniformly agree car salespeople should be "disintermediated." This is substantiated by the fact that automobile-related Web transactions are expected to reach close to $1 billion this year. Car dealerships are not like supermarkets; you've already made most of your buying decisions when you enter the showroom. It is, in effect, a factory outlet. For this reason, it's not hard to imagine buying directly from the factory. Automobile manufacturers would embrace this strategy aggressively if it did not risk annoying the prime retail channel in the short term.

Car salespeople are comforted by this reality, but they also know their days are numbered - especially the young dealers, who won't be dead before it happens. They may be rude, but they're not dumb. They need to adopt a better attitude, become more pleasant, and focus on aftersales. The latter can be as silly as a birthday card or as serious as a warrantied house call. Therein lies the secret: as you are about to be disintermediated, reintermediate yourself by adding a new dimension of value. Typically, this is a service with some flavor of added personalization.

What bits have to learn from atoms


Unlike tomatoes or cars, real estate listings, stock quotations, and airline schedules are bits, easily and inexpensively shipped at the speed of light. Bits need no warehousing, and the cost to make more is effectively zero. For this reason, real estate agents, stockbrokers, and travel agents will disappear much more rapidly than food wholesalers or car dealers. In the case of travel planning, a great deal of hocus-pocus has been introduced - the purpose is to make it almost impossible for you or me to understand the jargon of airline reservations or the price changes, which are posted five times a day! As computer programs are developed to help normal people make their own reservations, the travel agents will need to learn something from the car salespeople.

I may be nostalgic, but I recall that old-fashioned travel agents knew something about travel - many of them had actually traveled and had tried hotels. More important, they got to know their clients and could personalize their recommendations. "Nicholas, since you like the Okura in Tokyo and the Peninsula in Hong Kong, you'll love Raffles in Singapore." And I do. Eventually, computers will do that, too. But individualized service is certainly one way to keep a step ahead of being disintermediated; that is, to reintermediate.

Reintermediated publishing
The people who really ought to be disintermediated are publishers. Here I draw a distinction between magazines (of course) and books: the former sells context, and the latter sells content. The content side of the equation can and will go direct the fastest. Since books are physical things distributed largely through thousands of retail outlets that buy one or two copies at a time, you and I would have trouble distributing as well as Knopf. Otherwise, we really can do without them.

But tilt. People will say, "I bought your book because Knopf published it." Knopf was the talent scout, the finishing school, the company whose judgment is trusted. Well, rubbish to that. Think of the last three books you've read. Do you remember the publisher? You know the author and the title, as well as the book's color, shape, and thickness. But you're unlikely to recall which company published it. Whether you read Grisham or Goethe, you read the author, not the publisher. That's why traditional book publishers will slowly but inevitably disappear. Bookstores will vanish even sooner, as they bring almost no value over a Web site like Amazon.com.

So who will remain? The answer is a new intermediary. One who - or that - tells you which books you are most likely to enjoy. Think of it this way: How many hours have you wasted on a book that was just not worth your time? I feel about reading a book the same way I feel about waiting for a bus. Having already invested time doing so, I feel I might as well amortize that time by spending a bit more, and a bit more, until the bus comes - no matter how late. The digital intermediaries may change that forever. I want them to. So do you.
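What might that new intermediary compute? A minimal sketch, assuming readers are willing to share their reading lists (the names and the data are invented; a real service would use far richer signals):

```python
# Minimal sketch of the "new intermediary": recommend books favored by
# the readers whose tastes overlap most with yours.

READERS = {
    "ann": {"Being Digital", "Mindstorms", "The World Challenge"},
    "bob": {"Mindstorms", "Faust"},
    "you": {"Being Digital", "Mindstorms"},
}

def recommend(readers, me):
    """Score each unread title by how much its readers' tastes match yours."""
    mine = readers[me]
    scores = {}
    for other, books in readers.items():
        if other == me:
            continue
        overlap = len(mine & books)          # shared taste with this reader
        for title in books - mine:           # titles you haven't read
            scores[title] = scores.get(title, 0) + overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend(READERS, "you"))   # ['The World Challenge', 'Faust']
```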
Next Issue: On Digital Growth and Form

Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.09, September 1997.


WIRED 5.10 - On Digital Growth and Form

NEGROPONTE

Message: 52 Date: 10.1.97 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject:

On Digital Growth and Form

Being digital has three physiological effects on the shape of our world. It decentralizes, it flattens, and it makes things bigger and smaller at the same time. Because bits have no size, shape, or color, we tend not to consider them in any morphological sense. But just as elevators have changed the shape of buildings and cars have changed the shape of cities, bits will change the shape of organizations, be they companies, nations, or social structures. We understand, for example, that doubling the length of a fish multiplies its weight no less than eight times. We know that suspension cables break after a certain length because they cannot support their own weight. We are almost clueless, however, about the fractal nature of the digital world and how it will change the shape of our environment. Yet the effect will be no less substantial than if we changed the force of gravity.
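The fish is just the square-cube law at work: weight tracks volume, and volume grows as the cube of length, so

$$W \propto L^{3} \quad\Longrightarrow\quad \frac{W(2L)}{W(L)} = \frac{(2L)^{3}}{L^{3}} = 8.$$

The suspension cable fails by the same reasoning: its weight grows with its length while the strength of its cross section does not, so beyond some length it must snap under its own weight.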

Cyberspace is not a tree


The most astonishing part of the Net is that nobody is in charge. Everybody knows this, but nobody really wants to believe it. The buck must stop somewhere. Surely somebody is in control. After all, football teams have captains, and orchestras have conductors. In fact, we take for granted some form of authority, some hierarchy, in almost everything. In childhood it comes from parents and teachers. In adult life it comes from bosses and government. While we may not always be pleased with where we stand in that hierarchy, at least we understand it. But sometimes the mere presence of a police car can cause traffic jams.

The Net - a reliable system composed of loosely connected and imperfect parts that work because nobody is in control - shakes up all our centralist notions, and hierarchy goes away by example. Cyberspace is a lattice. If a part doesn't work, you go around it. The look and feel is suddenly much more biological, taking its character more from flora and fauna than from the unnaturally straight-line geometry in artifacts of human design. Picture the loose-V formation of ducks flying south.

But ducks don't run banks


Yes, many pieces of our world - work and play - do have a centralism to them. Hierarchy has its place. But even the most conservative centralist will agree that organizations have flattened, with considerably fewer levels between top and bottom. Mitsubishi Trading Company, for example, summarily removed an entire level of middle managers, and other firms are doing the same. In part this is due to a competitive market economy that demands streamlining. But in greater part it is because modern communications allow people to deal with more than seven others (plus or minus one). Add current-day management doctrine and you get even thinner social forms. Leaders distinguish themselves by what they do, not by where they sit - something which many politicians and industrialists have yet to note. The computer industry learned this with open systems, where competing with imagination proves far more profitable than doing so with locks and keys.

A libertarian view of the world adds flatness to decentralism and concludes that large organizations, like the nation-state, are doomed. This is only half true. Instead, I would liken the digital world to indigenous architecture, where local and global forces make for individualism and harmony at the same time. Each house on a Greek island is totally its own design, reflecting the ad hoc needs of various individuals over time. But common use of local materials - building in stone and applying whitewash to reflect the heat - results in a collective order. As soon as you use steel and air-conditioning, however, the only way to protect that harmony is to legislate it, relying on zoning laws to do what nature did before.

Bigger and smaller at the same time


My gripe with the nation-state is that it is just the wrong size - it does not mesh with the digital form of the future. Most nations are too big to be local, and all nations are too small to be global. What the Net is doing is forcing all of us into a body of law we do so badly - international law. Law of the sea, nonproliferation treaties, and trade agreements take forever to negotiate and are hard to maintain because nobody's primary self-interest is that of the world as a whole. As soon as there is a means and mind-set to be global, governance should be pushed down into the village and up onto the planet.

We see this happening to a limited extent if we look simultaneously at the business and political worlds. Economic forces are pushing toward a regionalization of commerce, and political forces are tending toward the breakup of nations. Bigger and smaller. Businesses will do the same. Companies like Time Inc., News Corp., and Bertelsmann keep getting bigger and bigger. People worry about control of the world's media being concentrated in so few hands. But those who are concerned forget that, at the same time, there are more and more mom-and-pop information services doing just fine, thank you very much.

The value of being big is twofold: size affords organizations the ability to deal with worldwide physical space and the ability to lose lots of money in order to make a lot more. The value of being small needs no explanation. At this point in history, it is hard to imagine that our highly structured and centralist world will morph into a planetful of loosely connected physical and digital communities. But it will. For this reason, more and more attention needs to be paid to just how - and how well - we can coordinate this new mass individualization. It is, for example, easy to see who will build the road in my village. It is considerably harder to see who will connect our villages, especially if some have less wealth or control than others. It is also hard to see how we will agree on various standards. Think of it: we live in a world where we cannot even agree on which side of the road to drive on.

Next Issue: New Standards for Standards
Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.10, October 1997.


WIRED 6.10 - Being Anonymous

NEGROPONTE

Being Anonymous

The digital world is about personalization. While atoms are prone to be repeated as industrial-age artifacts of assembly-line manufacturing, bits can be easily changed to deliver a customized product. And the new age of individualization brings with it all kinds of personalized belongings. In the most trivial sense, it means more things like vanity license plates and monogrammed shirts - your logo, not Ralph Lauren's. In a deeper sense, personalization provides comfort, security, and self-esteem. It is the means by which we are understood and express ourselves as individuals.

The benefits of being unique can be as mundane as getting greeted by name or as magical as ordering a full meal with nothing more than a nod. It can be as complex as a long friendship that allows another person to understand, for example, the difference between what you mean and what you say. You know they know and they know you know they know. That is personalization.

Acquaintance is the tool humans use to draw inferences, to unravel ambiguities and fill in missing information. Knowing a person makes communication much easier. But if we are not careful, that knowledge will leak into unwanted places, and we will pay the price in lost privacy. And by privacy I mean not just a theoretical and God-given right, but an everyday need and convenience.

Privacy pirates
Humans like to delegate. (You cannot do everything yourself anyway.) And because modern society increasingly engages other people in our personal affairs, we knowingly and unknowingly trade off the risk of betrayal for the value of personal attention. In the case of some services, like the practice of law and medicine, the potential hazards of revealing facts about yourself are reduced by legal or ethical practices. By contrast, a high-society butler or upstairs maid is not bound by a professional code and is often the star witness in domestic disputes. Nobody knows you better than the person who has been serving your idiosyncrasies, filtering your information needs, running your bank accounts, or making your bed. Most of us don't mind the risk. The quality of life is so greatly enhanced by personalized service that we are willing to freely reveal a great deal about ourselves to many other people. It is important to note that several parties are usually involved, even in our inner circle of friends and assistants. Fortunately, no one person has a complete model of us, and it is hard for them to share the parts.

I will even entrust a machine with much of the same personal information. This information, however, is much more easily shared among other computer and human agents. In fact, far too much of the information about me - my "digital self" - is not coming from me directly. It is being culled without my knowledge and used for things that have no direct benefit to me. It is being pirated for purely commercial purposes, turning my personal data from an asset into a liability. Junk email and telemarketing solicitations are increasingly frequent examples of what results from this hijacked and repurposed information - of how good can change so quickly to bad.

Because digital buccaneers gather their information surreptitiously, all too much of it is wrongly inferred and not fact. If my credit card shows lots of charges at Japanese restaurants, it may mean I like sushi, or it may mean I have Tokyo-based business associates but hate Japanese food. American Express will never know which. I would be happy to tell them, of course, if there were any value to me in my doing so. In the meantime, I'll pass whenever I can on becoming a data sample every time I visit a Web site, thank you very much.

Being nameless
My wife and I keep a home in France. With the exception of the driver for Federal Express, nobody knows us, or even our name. The luxury of anonymity is just as extraordinary as the opposite extreme we enjoy elsewhere. (Keeping it, of course, is an art form.) And anonymity has lots of small benefits, especially when it comes to peace and quiet. In a physical place, unfortunately, you cannot have it both ways. In cyberspace you can.

A lot is written about digital identity, particularly about using the Net to role-play, to pretend you are somebody other than who you are. Almost nothing is written about the value of being nobody - not somebody different, but nobody in particular. The power of digital anonymity first struck me watching an electronic community for people worried that their spouse might have Alzheimer's. Because of the anonymity afforded by the chat room, people were willing to ask questions they would never have addressed under other conditions - and to become part of the community.
A less moving example of the value of being nameless is ecommerce. How many times have you arrived at a site and not purchased something because you were asked to fill out a detailed questionnaire? Independent of worrying whether Ken Starr might subpoena your book-buying records, you don't respond because you just don't want to hear back from everybody who sells you something. When Amazon.com emailed some advertising after my first purchase, I asked that they stop - they have been terrific about honoring the silence I requested. This type of digitally responsible company deserves to be successful.

Anonymous payments
Sadly, not all merchants will be as respectful of your privacy, and there's no accepted way of making a payment without revealing your identity. Even smartcards have to reveal their identity in order to be secure. The conventional wisdom in the payments field places little value on anonymity: "Privacy," I repeatedly hear, "is the fetish of ponytailed paranoids who have something to hide." Wrong. Digital privacy is a simple, practical matter, a necessary step so we can get on with ecommerce without creating an avalanche of unsolicited interruptions. The digital world is already too noisy; I want anonymity for reasons of tranquility, not dishonesty.

If done right, digital money is far better than cash. Beyond ease of payment, it could allow governments to eliminate money laundering, and let parents give children an allowance that can't be spent buying Penthouse. Furthermore, anonymous payment systems need not be symmetrical, as the physical world demands. You can pay anonymously, but retain the option to change your mind should you later need to prove that you paid. Still, on far more occasions than you can imagine today, you will want no identity in transactions. You will want to be nobody.
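One way to get that asymmetry is a cryptographic commitment. Here is a toy sketch in Python - illustrative only; deployed digital-cash schemes rest on blind signatures and much more:

```python
# Toy asymmetric payment receipt via a hash commitment: the payment
# carries no identity, yet the payer can later prove it was theirs.
import hashlib
import secrets

def pay():
    """Payer keeps a random secret; only its hash travels with the payment."""
    secret = secrets.token_hex(16)
    receipt_tag = hashlib.sha256(secret.encode()).hexdigest()
    return secret, receipt_tag   # the merchant stores only receipt_tag

def prove_payment(secret, receipt_tag):
    """Revealing the secret later proves you made this particular payment."""
    return hashlib.sha256(secret.encode()).hexdigest() == receipt_tag

secret, tag = pay()                        # transaction carries no identity
print(prove_payment(secret, tag))          # True: you may opt to identify
print(prove_payment("not mine", tag))      # False: nobody else can claim it
```

The merchant learns nothing about you at payment time, yet you, and only you, can later produce the secret behind the receipt tag.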
Next: Pricing Our Future

Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.10, October 1998.

