Nicholas Negroponte
WIRED Columns
6.12 Beyond Digital 6.11 Pricing the Future 6.10 Being Anonymous 6.09 One-Room Rural Schools 6.08 Contraintuitive 6.07 The Future of Retail 6.06 Bandwidth Revisited 6.05 Taxing Taxes 6.04 RJ-11 6.03 Toys of Tomorrow 6.02 Powerless Computing 6.01 The Third Shall Be First 5.12 Nation.1 5.11 New Standards for Standards 5.10 On Digital Growth and Form 5.09 Reintermediated 5.08 Wireless Revisited 5.07 Digital Obesity 5.06 2B1 5.05 Tangible Bits 5.04 Dear PTT 5.03 Pay Whom Per What When, Part II 5.02 Pay Whom Per What When, Part I 5.01 Surfaces and Displays 4.12 Laptop Envy 4.11 Being Local
4.10 Electronic Word of Mouth 4.09 The Future of Phone Companies 4.08 Building Better Backchannels 4.07 Object-Oriented Television 4.06 Who Will the Next Billion Users Be? 4.05 Caught Browsing Again 4.04 Affective Computing 4.03 Pluralistic, Not Imperialistic 4.02 The Future of Books 4.01 Where Do New Ideas Come From? 3.12 Wearable Computing 3.11 Being Decimal 3.10 2020: The Fiber-Coax Legacy 3.09 Get a Life? 3.08 Bit by Bit, PCs Are Becoming TVs. Or Is It the Other Way Around? 3.07 Affordable Computing 3.06 Digital Videodiscs: Either Format Is Wrong 3.05 A Bill of Writes 3.04 The Balance of Trade of Ideas 3.03 000 000 111: Double Agents 3.02 Being Digital: A book (p) review 3.01 Bits and Atoms 2.12 Digital Expression 2.11 Digital Etiquette 2.10 Sensor Deprived 2.09 Why Europe is So Unwired 2.08 Prime Time Is My Time: The Blockbuster Myth 2.07 Learning by Doing: Don't Dissect the Frog, Build It 2.06 Less Is More: Interface Agents as Digital Butlers 2.05 Bit by Bit on Wall Street: Lucky Strikes Again 2.04 The Fax of Life: Playing a Bit Part 2.03 Talking with Computers
http://www.media.mit.edu/~nicholas/Wired/ (2 of 3) [28-4-2001 14:08:10]
2.02 Talking to Computers: Time for a New Perspective 2.01 Aliasing: The Blind Spot of the Computer Industry 1.06 Virtual Reality: Oxymoron or Pleonasm? 1.05 Repurposing the Material Girl 1.04 Set-Top Box As Electronic Toll Booth: Why We Need Open-Architecture TV
1.03 Debunking Bandwidth: From Shop Talk to Small Talk 1.02 The Bit Police: Will the FCC Regulate Licenses to Radiate Bits? 1.01 HDTV: What's Wrong With this Picture?
NEGROPONTE
Beyond Digital

Sometimes defining the spirit of an age can be as simple as a single word. You may remember, for instance, the succinct (if somewhat cryptic) career advice given to young Benjamin Braddock, played by Dustin Hoffman, in the 1967 film The Graduate: "Plastics." "Exactly how do you mean?" asked Ben. "There's a great future in plastics," replied Mr. McGuire. "Think about it. Will you think about it?"

Now that we're in that future, of course, plastics are no big deal. Is digital destined for the same banality? Certainly. Its literal form, the technology, is already beginning to be taken for granted, and its connotation will become tomorrow's commercial and cultural compost for new ideas. Like air and drinking water, being digital will be noticed only by its absence, not its presence.

The decades ahead will be a period of comprehending biotech, mastering nature, and realizing extraterrestrial travel, with DNA computers, microrobots, and nanotechnologies the main characters on the technological stage. Computers as we know them today will a) be boring, and b) disappear into things that are first and foremost something else: smart nails, self-cleaning shirts, driverless cars, therapeutic Barbie dolls, intelligent doorknobs that let the Federal Express man in and Fido out, but not 10 other dogs back in. Computers will be a sweeping yet invisible part of our everyday lives: We'll live in them, wear them, even eat them. A computer a day will keep the doctor away.
Yes, we are now in a digital age, to whatever degree our culture, infrastructure, and economy (in that order) allow us. But the really surprising changes will be elsewhere, in our lifestyle and how we collectively manage ourselves on this planet. Consider the term "horseless carriage." Blindered by what came before them, the inventors of the automobile could not see the huge change it would have on how we work and play, how we build and use cities, or how we derive new business models and create new derivative businesses. It was hard, in other words, to imagine a concept such as no-fault insurance in the days of the horse and buggy.

We have a similar blindness today, because we just cannot imagine a world in which our sense of identity and community truly cohabitates the real and virtual realms. We know that the higher we climb, the thinner the air, but we haven't experienced it - we're not even at digital base camp. Looking forward, I see five forces of change that come from the digital age and will affect the planet profoundly: 1) global imperatives, 2) size polarities, 3) redefined time, 4) egalitarian energy, and 5) meaningless territory.
Being global
As humans, we tend to be suspicious of those who do not look like us, dress like us, or act like us, because our immediate field of vision includes people more or less like us. In the future, communities formed by ideas will be as strong as those formed by the forces of physical proximity. Kids will not know the meaning of nationalism. Nations, as we know them today, will erode because they are neither big enough to be global nor small enough to be local. The evolutionary life of the nation-state will turn out to be far shorter than that of the pterodactyl. Local governance will abound. A united planet is certain, but when is not.
growth in mom-and-pop companies, private planes, homespun inns, and newsletters written about interests most of us did not even know humans have. The only value in being big in any corporate sense will be the ability to lose billions of dollars before making them.
Being prime
Prime time will be my time. We'll all live very asynchronous lives, in far less lockstep obedience to each other. Any store that is not open 24 hours will be noncompetitive. The idea that we collectively rush off to watch a television program at 9:00 p.m. will be nothing less than goofy. It will make sense only for sporting events and election results - and that is only because people are betting. The true luxury in life is to not set an alarm clock and to stay in pajamas as long as you like. From this follows a complete renaissance of rural living. In the distant future, the need for cities will disappear.
Being equal
The caste system is an artifact of the world of atoms. Even dogs seem to know that on the Net. Childhood and old age will be redefined. Children will become more active players, learning by doing and teaching, not just being seen and not heard. Retirement will disappear as a concept, and productive lives will be increased by all measures, most important those of self. Your achievements and contributions will come from their own value.
Being unterritorial
Sovereignty is about land. A lot of killing goes on for reasons that do not make sense in a world where landlords will be far less important than webmasters. We'll be drawing our lines in cyberspace, not in the sand. Already today, belonging to a digital culture binds people more strongly than the territorial adhesives of geography - if all parties are truly digital. Ask yourself about the basics, about water, air, and fire. Remember the game 20 Questions? You begin by giving a hint as to whether you are thinking of an animal, a vegetable, or a mineral. OK. I am thinking of none of them. I am thinking of 100111100010110001.

Next: After six years of writing the back page, I have decided it is time to pass this prime real estate on to someone else, before I find myself on the wrong side of the Wired/Tired equation. I won't be gone too far and will appear at times in this and other parts of the magazine. Promise.
http://www.media.mit.edu/~nicholas/Wired/WIRED6-12.html (3 of 4) [28-4-2001 14:08:18]
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.12, December 1998.]
http://www.media.mit.edu/~nicholas/Wired/WIRED6-11.html
NEGROPONTE
Flatland
Young people, I happen to believe, are the world's most precious natural resource. They may also be the most practical means of effecting long-term change: Making even small opportunities for children today will make the world a much better place tomorrow. Frankly, I have almost given up on adults, who seem generally to have screwed things up despite the good work being done in many parts of the globe. So I am increasingly inclined to seek out ways for the 6- to 12-year-olds of our planet to learn how to learn, globally as well as locally. Education, however, is the formal jurisdiction of national, state, and local bureaucracies. And trying to bring about change through various ministries, departments, or boards of education tends to be a highly politicized and, at
best, slow-moving process. Some ex officio move is needed, something outside the official fabric of school that can do for learning what the Internet has done for communicating.

Fortunately, there is a small, very basic step we can take today that will have a huge, lasting effect on tomorrow: Price local telephone calls at a flat rate. Though most people in the United States already enjoy flat rates, the same is not true, alas, in most of Europe or the developing world. Flat rates for local calls are universally employed in only 13 percent of the countries around the globe. Sure, nothing is ever simple, but this one is real close.

Unfettered access to the Net is key to the future of education. And learning, whether it's face-to-face or at great distance, takes time. Yet metered, by-the-minute pricing fosters short-term thinking in the most elementary sense. Instead of encouraging children to explore, parents nervously watch the clock as soon as their kids log on. The incentive is to have your child spend less time learning, not more - something unimaginable with a book or a library. Ironically, the high cost associated with time spent on the Net is not from Internet access itself, which is generally flat rate, but from the local telephone bill.

Metered billing has come about from, among other things, the historical limitations of circuit-switched voice networks. Telecommunications in most of the world has traditionally been a public utility, owned and operated by the government; people therefore assumed civil servants were providing the least expensive and most beneficial service. The benefits of increased telecompetition, of course, have now become clear. And as the pendulum continues to swing toward privatization around the world, national phone companies must dress up for the party. Yet in anticipation of being privatized, some telcos have raised local rates.
And even in markets where new economic models have emerged with the growth of packet switching, some are arguing to price data on a per-packet basis. This is crazy - and exactly the wrong way to go for Internet users, who want and need low and fixed local rates. Mind you, I am not saying free or even unreasonably low. Fixed. Note to telcos: Take into consideration the cost of metered billing that you will now save by offering fixed rates. And give discounts for a second line. A lot of children will be better off for it.
For the price of a few TV-like commercial breaks, people can now make free calls - local or long distance, wireline or wireless - thanks to the Swedish company GratisTel; Seattle's Network 3H3 offers similar ad-supported service.
http://www.media.mit.edu/~nicholas/Wired/WIRED6-09.html
NEGROPONTE
Telecom paradox
The nations with the worst and most expensive telecommunications today are precisely those that will pay the highest price in terms of development. In any given developing country, improving the quality and extent of new telecom infrastructure is perhaps the easiest problem to fix. The economics on the demand side are much harder, in large part because usurious billing schemes are imposed by local régimes, whose leaders look upon telecom as a luxury to be taxed. Local calls in Africa, for instance, average US$2 per hour; phoning from one country to the next costs $1.25 per minute. But consider that many of these state-owned telcos are in nations that receive much of their income in hard currency - earned from such steep prices, among other things. This shortsighted approach, however, must change in favor of the long-term economic view.
Computer paradox
Computers keep getting faster, following Moore's frequently quoted law of doubling processing power every 18 months. Played backward, the law should read: At a constant speed, the cost of computers will be cut in half every year and a half. Manufacturing, of course, does not scale smoothly in reverse. But the potential for very low cost computers is wildly more than we have made of it. Why? Because inexpensive computing is a crummy business. The margins are too low and the economic model is that of a commodity, a prospect that frightens American business. US companies just do not know how to tackle the low end. And by "low end" I don't mean the much vaunted sub-$1,000 computer - I mean PCs that cost less than $100.
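Played backward, the arithmetic is simple compounding. A minimal sketch of the idea (the function name and the $1,000 starting price are illustrative, not from the column):

```python
# Moore's law "played backward": if performance per dollar doubles every
# 18 months, then a machine of *constant* speed should cost half as much
# every 18 months. Illustrative only; real manufacturing does not scale
# smoothly in reverse, as the column notes.
def price_of_constant_speed(initial_price, months, doubling_period=18.0):
    """Price of a fixed amount of computing power after `months`."""
    return initial_price * 0.5 ** (months / doubling_period)

# Three years (two halvings) shrink a $1,000 machine's price to $250.
print(price_of_constant_speed(1000, 36))  # → 250.0
```

Which is exactly the column's point: the sub-$100 PC is arithmetically plausible; what is missing is a business model for it.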
Education paradox
The most troublesome paradox - and the most difficult to change - is that of education itself. Developing countries look longingly at developed nations, with an eye toward copying their education systems. The sad truth, however, is that the Western notion of school stems from an industrial age in which the intellect of children is manufactured like Fords: Instruction is a serial, repetitive process driven by strict norms of curriculum and age. As my MIT colleagues Marvin Minsky and Seymour Papert are fond of pointing out, such schools are an extreme form of age segregation. Six-year-olds study with 6-year-olds, until next year, when they study with 7-year-olds. Only schoolchildren with siblings get the real advantages of age integration. Mind you, this isn't just younger children learning from older ones - little brothers and sisters helping their older siblings with computers has become a hallmark of our day. Age integration is a fundamental change we need to consider as part of revisiting the concept of school.
The goal is not to boost national standards or to stem the population flow into urban areas, though these may be by-products. The mission is to learn a lot more about learning itself. In the process we may find new models of education that can be used in all parts of the world - rich and poor, urban and rural. The catch is access.
LEOpolitical learning
Low Earth orbit satellites, or LEOs, are the wave of the future. The first such system, Iridium, will be put into service in September with 66 satellites serving the world as a single telecommunications system. Think of it as a cellular telephone grid - but one where you are stationary and the grid moves. Iridium, conceived in the late '80s, is optimized for voice, not data, but in a few years it will be followed by a next generation of LEOs (Teledesic being the most celebrated) optimized for the Net.

When that happens, suddenly, being rural does not matter. Being in the most remote part of the planet does not matter. In fact, such places are precisely where LEOs will not otherwise be saturated with urban traffic. By contrast, when you physically wire the world, remote places become the most expensive to serve. With LEOs, you have to cover the whole world in order for any single part of it to work - rural and remote access, in a sense, comes for free.

In the next five years, LEOs will thus change the balance of access. With very low cost computers and some boldness in education policy, it will be possible to touch the lives of all children, including those in the poorest and most remote regions of the world. The right step to take now is to use whatever means necessary to reach as many one-room rural schools as possible - to learn today about learning tomorrow. These apparently forgotten schools, paradoxically, may provide the best clues for real change in education.

The ideas above are in large part taken from the real plans of the 2B1 Foundation (www.2b1.org/), in cooperation with the Fundación Omar Dengo in Costa Rica. Costa Rica is one of the few nations to seriously embrace computers in primary education; one-room rural schools make up 40 percent of the country's primary schools, serving nearly a tenth of the K-6 population.

Next: Being Anonymous
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.09, September 1998.]
http://www.media.mit.edu/~nicholas/Wired/WIRED6-08.html
NEGROPONTE
Contraintuitive
I finally switched to Wintel. And what angers me most about the move is that Apple forced me into it. One compelling reason for the change: With the Macintosh's shrinking market share, no start-up or single entrepreneur can feel confident basing his or her work on the Mac; even kids shy away from it, preferring what many children insist are more "serious machines." Another reason, well emphasized by the press, is the low and decreasing number of old third-party developers, and the utter absence of new ones. Yet another reason: Mac software and peripherals are far too few in number; even worse, when they do appear, it's much later than the competition. People concerned about tomorrow just cannot settle for the tools of yesterday. Finally, and unfortunately for Apple, its last-ditch efforts to leap back on the leading edge - the G3-based Power Mac and PowerBook; the iMac, "the Internet-age 'computer for the rest of us'" - are far too late, if not too little. "Pro, Go, Whoa"? No. I already think different. So, sadly, I switch away from a system that I used for almost 15 years, at least three hours a day, seven days a week, without ever once in all those years having read or even opened a manual. The nightmare begins.
Windows as a snowboard
Learning to snowboard is considerably harder if you know how to ski; in fact, the first day requires enormous humility from the otherwise seasoned skier. But after two or three days, your balance overcomes the "unnatural" counterintuitive moves you must make. By contrast, after six months I am still falling all over the slopes of Windows, in total disbelief at the collective complexity and unbelievable inconsistencies introduced by all parties involved. This is an indictment of not just Microsoft, but the entire community of software and hardware developers who have done such a bad job of making usable and explainable systems - so much so that I'm convinced, in dark moments, that some of this is purposeful. A slightly more charitable view follows.
Windows as a city
Driving in a city for the first time, you are completely dependent on road signs. And far too often the most important one is behind a fully blossomed tree, is unlit at night, has changed names without notice, or uses nomenclature that is understandable only if you know the city. If you are a resident, of course, you never notice these inconsistencies, because you don't use signs to navigate. You already know where you are, where you're going, and how to get there. Though some cities try to use universally recognized, "intuitive" road signs, the city of Windows certainly needs to be much more friendly to nonresidents. System designers take note: It is time to test-drive your grandmothers.
cursor in Word, I enter into an argument about what I mean - it is so clever! Yet something as basic (to Mac users) as click versus double-click is not handled consistently. Puh leazzze. On the other hand, surely it is possible for software designers, in and outside Microsoft, to be consistent about such simple tasks as exiting, quitting, or closing a program - three words should not be license for three ways of doing the same thing. Also, of particular annoyance is the complexity of establishing a modem connection while on the road - huge effort is required to outsmart the smart dialer, which is so stupid as to assume you will dial long distance.
http://www.media.mit.edu/~nicholas/Wired/WIRED6-07.html
The Message: 60
Date: 07.1.98
From: <nicholas@media.mit.edu>
To: <lr@wired.com>
Subject: The Future of Retail
NEGROPONTE
You enter a store. You see something you like. You write down the product name and manufacturer. You go home and order it over the Internet. As a result, you didn't have to carry it, you probably got a better price, and you may have avoided sales tax. The store in this scenario is merely a showroom. Have I just described the exception to tomorrow's retail, or the rule?
Rightfully chicken
For the most part, manufacturers of toys, cars, clothes, et cetera, seem less than eager to advocate that you disintermediate the middleman and instead buy directly from them. Though that would be more profitable for the producer and less expensive for the consumer, it would also alienate the single largest outlet for toys, cars, clothes, et cetera - the retailer. Still, consumers will inevitably provide the pressure for change. They will band together to buy cars as a fleet and at fleet prices. They'll organize by church group to buy Barbies directly from Mattel. In the digital world, consumers hold almost all the power, which is a nice change. What consumers don't do, entrepreneurs will, with megastores, auctions, and swap meets - all in cyberspace. And they will do so without paying any rent to anybody.
known and successful cyber services in the US and UK; check out, for one, www.groceries-online.com/). A number of things make grocery shopping so challenging. The next time you leave a supermarket, just take a look at your shopping cart and imagine those items coming to your home one by one. It would be both a traffic jam and a logistical nightmare, not to mention the clamor of the doorbell constantly ringing. At the same time, home delivery of all sorts of things is far from a historical oddity. When I was young, my mother would call the grocery store and say what she wanted. It would be delivered in minutes. So what is new? What has changed is that a great many of the staples you buy at the supermarket are now available elsewhere. And in the digital world, you may find considerable advantage in buying some of those staples directly from the manufacturer. This applies equally well to Pampers and Pabst Blue Ribbon.
Midnight express
The catch is, you're never home. More important, you are least likely to be home when packages are most likely to be delivered - that is, daytime. Among other things, we need to rethink the concept of a mailbox, originally conceived for letters, themselves a dying breed (other than bills). The mailbox of tomorrow ought to be a cubic yard, with the potential for refrigeration. Various schemes might further protect goods from the errant courier and provide receipts as needed. In terms of delivery, the empty streets of nighttime can be used to transport all the things that people buy over the Net. That is, after all, how your newspaper is delivered. And there is no reason for the morning news not to be accompanied by fresh bagels - media companies should note the opportunity to cook up a cobranded product called "The Daily Bread."
advised: The digerati don't need you any longer. And very soon everybody will be digital.

Next: Contraintuitive
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.07, July 1998.]
http://www.media.mit.edu/~nicholas/Wired/WIRED6-06.html
NEGROPONTE
Bandwidth Revisited
Even after several generations of being digital, the basics of bandwidth still cause all kinds of confusion: Where does it come from? How much do you need? What does it cost? It's hard enough to understand without the detours (sorry) caused by likening bandwidth to a highway - the onramps, the offramps, not to mention the roadblocks and tollgates. Rest assured that adding an ISDN or cable modem "fast lane" for your home PC does not solve all the problems. A more appropriate, if less concrete, likeness might be paranoia, because real and perceived bandwidth are widely separated, and you cannot tell how much of the problem is your own fault. The real frustration comes from the inability to speed up the process by taking any one or, for that matter, any number of measures. The World Wide Wait, as it is too often known, is a chain of many events mostly outside your control, the slowest link of which determines the verve of your connection. Worse, the slowpoke in the connectivity chain is hard or impossible to identify. Imagine waiting for a bus, not knowing how many people are in line, where you stand in that line, when the bus is coming, or how big it will be when it does. (Woops - just skidded into another roadway metaphor.) One of the best ways to deal with bandwidth is to understand it on its own terms - which is not easy.
Not as advertised
Bandwidth is the capacity to deliver bits, typically measured by how many you can transfer in one second. Newcomers often don't know the difference between bits and bytes - there are 8 bits in a byte, which happens to be enough to represent a single ASCII character, including standard Latin alphanumeric characters, punctuation, and most accents. Without going into the brutal details, suffice it to say that you would like any string of 1s and 0s you transmit to be the same as those that are received. This cannot be blindly guaranteed without spending some of that same bandwidth to deliver extra bits for the sake of checking, correcting, or, in the worst case, requesting that bits be resent. When it comes to bandwidth, you're not getting all you think you are - a bit like the coverage of an insurance policy. But that's OK, if you listen to the press, because a lot more bandwidth is coming, though nobody is sure exactly when. The 16 million-plus miles of optical fiber found in the US alone will soon have the capacity to carry 400 billion bits per second, thanks to recent technology from Lucent (AT&T's former hardware house). The telex, by comparison, operated at 75 bits per second.
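The relationship between bytes, raw line rate, and the bits spent on checking can be put in back-of-envelope form. A hypothetical sketch (the 20 percent overhead figure and the function name are my assumptions for illustration, not measured values):

```python
BITS_PER_BYTE = 8

def transfer_seconds(payload_bytes, line_bps, overhead_fraction=0.2):
    """Seconds to deliver a payload over a link, assuming a fraction of
    the raw bandwidth goes to framing, error checks, and resent bits."""
    usable_bps = line_bps * (1 - overhead_fraction)
    return payload_bytes * BITS_PER_BYTE / usable_bps

# A 1-megabyte file over a 33,600-bps modem: about five minutes, not the
# four you would guess from the advertised line rate alone.
print(round(transfer_seconds(1_000_000, 33_600)))  # → 298
```

The gap between the advertised rate and the usable rate is the "insurance policy" effect the column describes: you are never getting quite what the label says.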
video is fat: You read (the Latin alphabet) at about 600 bits per second and you watch television at about 3 million bits per second. A picture is not a thousand words, but more like a million.
Clogged pipe
I remember graduating from a 110-bits-per-second modem to 300 bps and feeling the astonishment of speed. Later, 1,200 bps felt like a miracle and 9,600 was lightning. But thereafter, for me, it came to a thud, though I continued to get more bandwidth - there was even a time when I had more Internet bandwidth coming into my home than the entire nation of Switzerland. That fat pipe now feels empty simply because other forms of congestion get in the way. Communications software has become obese - the comfort of the saddle has separated us from the horse. Intercountry links are slow, often purposely so, because the self-interest of governments or telcos (often the same) is not well served otherwise. Servers are swamped, because bandwidth is also an issue inside a computer, as in the speed at which a processor can talk to memory.
NEGROPONTE
Taxing Taxes
After discovering the basic principle of electromagnetic induction in 1831, Michael Faraday was asked by a skeptical politician what good might come of electricity. "Sir, I do not know what it is good for," Faraday replied. "But of one thing I am quite certain - someday you will tax it." Little did he know how right he was, though more than a century would pass before the word bits existed. The idea of taking a tax bite out of digital communications comes courtesy of The Club of Rome, specifically Arthur Cordell and Ran Ide's 1994 report "The New Wealth of Nations." More recently, redistributing the benefits of the information society has been championed by influential economist Luc Soete, director of the Maastricht Economic Research Institute on Innovation and Technology. Despite their repute, supporters of such a bit tax are clearly clueless about the workings of the digital world.
Tax bytes
A typical book contains about 10 million bits, which might take even a fast reader several hours to digest. By contrast, typical video - digital and compressed - burns through 10 million bits to produce less than four seconds of enjoyment. A bit consumption tax, in other words, makes no more sense than tariffing toys by the number of atoms. Maybe the information highway metaphors have gone to the heads of digitally homeless economists, who think they can assess value by something akin to counting cars. Of course, collecting taxes can be tough enough without trying to assess something you can't see, especially when you don't know where it is going to or coming from. This helps explain why the Clinton administration in late February reaffirmed its commitment to making cyberspace a global free-trade zone. The policy's purpose, the brainchild of White House senior adviser Ira Magaziner, is both economic stimulus and practicable fairness. So whether or not Congress has kept its promise to vote on the related Internet Tax Freedom Act by early spring, the legislation has the full force of careful deliberation - and historical inevitability - behind it. For these and other reasons, Europe abandoned the bit tax. But the idea still survived three and a half years of consideration, despite the growing awareness that bits by their very nature defy taxation.
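To put numbers on the book-versus-video comparison above - a rough sketch in which the three-hour reading time is my assumption for a fast reader, not a figure from the column:

```python
BOOK_BITS = 10_000_000                  # a typical book, per the column
BOOK_SECONDS = 3 * 3600                 # assumed fast-reader time: 3 hours
VIDEO_BITS_PER_SECOND = 10_000_000 / 4  # 10 Mbits buys ~4 s of compressed video

book_rate = BOOK_BITS / BOOK_SECONDS    # bits "consumed" per second of reading
ratio = VIDEO_BITS_PER_SECOND / book_rate

print(round(ratio))  # → 2700
```

A flat per-bit tax would thus charge a viewer thousands of times more per second of enjoyment than a reader - which is the column's point about assessing value by counting cars.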
Making the Net a free-trade zone works for the US federal government. The Treasury derives most of its revenues from personal and corporate income taxes. If the economy sees a boost from any form of free trade, the Feds will see a proportionate rise in their own intake. Simple arithmetic. However, many countries and most states don't work that way. Instead, a sales tax is the means - often the principal means - of filling government coffers. Ohio governor George Voinovich, chair of the National Governors' Association, declared that the Internet Tax Freedom Act "represents the most significant challenge to state sovereignty that we've witnessed over the last 10 years." Both he and the act may be right. The sales tax is also particularly popular among bureaucrats in developing nations, where collecting income tax is even harder because the poor make so little and the rich can avoid so much. Plus, the sales tax turns retailers into a nationwide web of tax collectors. And the tax is "fair" because it's based on what you spend versus what you earn. Still, Voinovich and company would be smart to start looking elsewhere, because their receipts will plummet as we buy more and more online, especially if what we buy are bits.
Jurisdiction in jeopardy
But the most taxing aspect of cyberspace is not the ephemeral nature of bits, the marginal cost of zero to make more of them, or that there is no need for warehouses to store them. It is our inability to say accurately where they are. If my server is in the British West Indies, are those the laws that apply to, say, my banking? The EU has implied that the answer is yes, while the US remains silent on the matter. What happens if I log in from San Antonio, sell some of my bits to a person in France, and accept digital cash from Germany, which I deposit in Japan? Today, the government of Texas believes I should be paying state taxes, as the transaction would take place (at the start) over wires crossing its jurisdiction. Yikes. As we see, the mind-set of taxes is rooted in concepts like atoms and place.
http://www.media.mit.edu/~nicholas/Wired/WIRED6-05.html (2 of 3) [28-4-2001 14:09:13]
With both of those more or less missing, the basics of taxation will have to change. Taxes in the digital world do not neatly follow the analog laws of physics, which so conveniently require real energy, to move real things, over real borders, taxable at each stage along the way. Of course, even analog taxation without representation is no tea party.
Getting physical
Looking ahead, taxes will eventually become a voluntary process, with the possible exception of real estate - the one physical thing that does not move easily and has computable value. The US has a jump-start on the practice, in that 65 percent of local school funds come from real estate taxes - a practice Europeans consider odd and ill-advised. But wait until that's all there is left to tax, when the rest of the things we buy and sell come from everywhere, anywhere, and nowhere. Next: Bandwidth Revisited
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.05, May 1998.]
http://www.media.mit.edu/~nicholas/Wired/WIRED6-04.html
NEGROPONTE
RJ-11
In telecommunications parlance, the "last mile" is endlessly debated in terms of wired versus wireless, symmetry versus asymmetry, and bandwidth needs - real, perceived, or actually used. This story is about the "last centimeter" - its bad design, unreliability, and public absence when you really need it. Think of it: the lowest common denominator in being digital is not your operating system, modem, or model of computer. It's a tiny piece of plastic, designed decades ago by Bell Labs' Charles Krumreich, Edwin Hardesty, and company, who thought they were making an inconspicuous plug for a few telephone handsets. Not in their wildest dreams was Registered Jack 11 - a modular connector more commonly known as the RJ-11 - meant to be plugged and unplugged so many times, by so many people, for so many reasons, all over the world.
machines than handsets in most countries. In any case, there is little likelihood that this physical standard will be replaced by anything other than wireless connections - the usability and reliability of which are a whole separate story. Suffice it to say that most people will be plugging in for a long time. Since we'll have to live with the RJ-11 for a while, we can surely make it easier to use than it is now.
Dongling participles
Dongle supposedly comes from the verb to dangle. If you do not have one, consider yourself lucky. I travel with four. A dongle is a hardware key and cable assembly that attaches to an external port; one of mine takes the otherwise solid female part of an RJ-11 and introduces flimsiness and delicacy to map the thin profile of a PCMCIA card - what a really dumb name - into the roughly square form factor of the RJ-11. My advice to anybody planning to purchase a laptop: don't buy one that does not have a built-in RJ-11. If you buy one without, you are simply adding another point of weakness in your connectivity and will in all likelihood find yourself with the wrong dongle just when you need it.
Airport dilemma
One reason to join airline clubs is to have access to RJ-11s - and, often, free local phone service. This is fine if you can afford a membership and if the airport you happen to be in has a club with RJ-11 jacks. Otherwise, you are too often captive to a national public phone system that seems not to have heard of data communications. With the exception of a rare AT&T pay phone, which looks like a pregnant Sega game, your only hope is an acoustic coupler. But this is yet another thing to carry - and it's not particularly reliable at that. Surely we can build more pay phones with RJ-11 jacks. In fact, an RJ-11-only pay phone would not need a keypad, credit card reader, or coin slot; your PC would send the number and billing data. This would be the least expensive "phone booth" ever made.
Hotel malice
In some countries, especially those in western Europe, phones are still hardwired into the wall. In others, phones might use any one of nearly 200 phone jacks. Still, more and more places are accommodating or switching to the RJ-11 in the wall, in the phone, or as an auxiliary jack in the handset - the latter being the most appropriate in a hotel room. Some hotels still don't have such auxiliary jacks in the handsets, offering the lesser convenience of the RJ-11 in the wall. But because hotel managers also have learned that constant use breaks the clip, many cut it off, making the plug a onetime "permanent" connection, never to come out again. That is inexcusable. Even the most benign digerati will use anything from a penknife to a corkscrew to reopen the jack, the effect of which is well deserved but devastating. Get with it, hotels. I was thrilled to see that the latest Zagat hotel guide includes a ranking of computer friendliness. About time.
Getting it straight
Yet even if you are lucky enough to get a room with an easily removable, seemingly usable RJ-11 jack, don't be surprised if it does not work - i.e., there's no dial tone. Though the plug itself has become fashionable, in some cases the wiring is not consistent, especially in small telephone exchanges. The RJ-11 module has up to six wire conductors, but a simple phone connection needs only two. And while most of the world agrees on which two to use, just enough places (usually hotels, alas) don't. This is one of those exasperating instances in which we cannot even agree which way is up. As best I can tell, it's a 50/50 bet as to whether you will find the clip on the top or the bottom - sometimes it is even set sideways. (The problem isn't just with technicians installing hotel wiring: two models of PowerBook had it one up and one down.) While this may seem to be nitpicky, the problem is - literally - more than meets the eye. Because RJ-11 sockets are often sufficiently recessed that you cannot easily see the jack's orientation, you have to use trial and error - and error does the plug no good. So the next time you travel, the next time you connect, think about this critical little piece of plastic. Don't you wish someone would make an unbreakable connector, even one priced as high as US$100? Maybe it is time for designers across the board to agree that RJ-11 clips should go on the top. There's no real reason to prefer the top to the bottom, but if we all did it one way, over time we might just get it straight. Next: Taxing Taxes
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.04, April 1998.]
http://www.media.mit.edu/~nicholas/Wired/WIRED6-03.html
NEGROPONTE
Message: 56 Date: 03.1.98 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject: Toys of Tomorrow
Why would Professor Michael Hawley swallow a computer? Because he plays. He plays the piano. He plays hockey. He plays with ideas. In fact, he plays with notions like running the Boston Marathon with a radio transmitter pill inside his stomach, from which his core body temperature measurements would be broadcast to any and all media willing to listen (ttt.www.media.mit.edu/pia/marathonman/). The wild, the absurd, the seemingly crazy: this kind of thinking is where new ideas come from. In corporate parlance it's called "thinking out of the box." At the MIT Media Lab, it's business as usual. The people capable of such playful thought carry forward their childish qualities and childhood dreams, applying them in areas where most of us get stuck, victims of our adult seriousness. Staying a child isn't easy. But a continuous stream of new toys helps.
be worlds apart. But are they? When a young child plays with a toy, the interaction can be magic. Toys unlock that magic part in the toy and part in the child's head. Toys are the medium and the catalyst of play. Recognizing the power of play, Hawley and company are fundamentally rethinking toys, exploring the convergence of digital technology and the toys of tomorrow - another case where bits and atoms meet. Computers have changed almost all forms of work. And, since play is the work of children, it is time to revisit the tools of their trade.
Hawley and others at MIT have been making new friends around the world to help invent toys. Their new business partners these days include Lego, Disney, Mattel, Hasbro, Bandai, Toys "R" Us, and others. Their other playmates are computer, communications, and entertainment companies like Intel, Motorola, Deutsche Telekom, Nickelodeon, and, believe it or not, the International Olympic Committee. Never before have the world's leading toymakers, technology companies, and sports organizations collaborated in such a way - which is just terrific, because the new world of digital toys won't be invented by any one group. Nobody is quite sure what will turn up on this new road to invention. The program just started. Stay tuned. But one thing is clear: Toys of tomorrow will carry some of the most awesome and inspiring technology humankind has yet created and place it in the hands of children. Where it belongs. Think of it this way. Being "wired" does not mean becoming "computer literate" any more than driving an automobile requires becoming "combustion literate." The power of toys is that they reach back to and shape the earliest years in our lives. One day, our grandchildren will naturally assume that teddy bears tell great stories, baseballs know where they are, and toy cars drive themselves with inertial guidance. Lucky them. Next: RJ-11
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.03, March 1998.]
http://www.media.mit.edu/~nicholas/Wired/WIRED6-02.html
NEGROPONTE
Message: 55 Date: 02.1.98 From: <nicholas@media.mit.edu> To: <lr@wired.com> Subject: Powerless Computing
Until last year, my biggest PC power problem was the inconvenience of carrying around four to six spare batteries. At least, I told myself, it's good exercise. Two very different circumstances made me rethink powering computers. The first was a story in the February 28, 1997, issue of The Wall Street Journal. PCs, the paper reported, are mostly used like "potted plants," yet according to the Sierra Club, this wasted resource can account for 200 pounds of carbon-dioxide pollution every year - about 2 percent of what's emitted by a car that is "actually doing something." Turns out the story was just plain wrong. A desktop computer running continuously requires less than half a percent of the energy used to power a car (ditto its carbon production). And a laptop can reduce energy consumption to less than 10 percent of what's used by a typical microcomputer. My second encounter came in July, at a gathering hosted by the Media Lab and the 2B1 Foundation. Participants from 45 developing nations spent six days sharing ideas and experiences about introducing, against all odds, computers to Third World education. At first glance, the odds seemed stacked against them three to one: the high cost of computers; the low availability of connectivity (affordable or otherwise); and the arrested development of educational theory, practice, and politics. Another challenge, however, proved even more basic: power. In the poorest countries, some schools and most homes don't have any. In fact, more than one-third of the world's population is without electricity. One of the 2B1 participants, Peter Patrao from India, offered a simple solution: bicycles. Vigorously riding a bike generates about 100 watts. The image conjured by Patrao's classroom is certainly a cute one - think of half the class pedaling while the rest work on PCs, redefining, among other things, "recess."
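Patrao's image rests on simple arithmetic. A quick sketch, where the PC's draw is my own assumption (the column gives only the 100-watt pedaling figure):

```python
# How many riders does one running classroom PC need?
PEDAL_WATTS = 100   # the column's figure for vigorous cycling
PC_WATTS = 90       # assumed draw of a modest desktop PC and display

riders_per_pc = PC_WATTS / PEDAL_WATTS
print(f"{riders_per_pc:.1f} riders needed per running PC")
# prints "0.9 riders needed per running PC"
```

With numbers like these, roughly one pedaler per machine - which is exactly the half-pedaling, half-computing classroom the column pictures.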
Additional solutions to powering PCs ranged from car batteries to more imaginative ways of harnessing the wind and the sun. Then, in August, inventor Trevor Baylis, following his work on windup radios, reported success with a windup computer - clear progress toward curbing the PC's outsized appetite for power.
Power diet
A laptop's power budget gets eaten up by the display, the disk drive, and the circuitry, in that order. The display takes the biggest bite, typically 25-40 percent (and rising, as processors go to lower voltages). For a variety of technical reasons, backlit displays have so far provided the best contrast ratio and highest brightness. The power problem is that most of the light is lost in transmission: typically less than 10 percent gets through the flat panel. The rest is dissipated as heat. Still, an LCD uses five to ten times less power than a CRT. A reflective display, by contrast, uses almost no power, taking most of what it needs from ambient light. This is why most calculators and all wristwatches require only tiny power supplies. So far, nobody has achieved an active-display medium that can reflect light with sufficient contrast. Actually, one reflective display does a pretty good job - paper. In fact, "digital ink" has made significant progress in labs. (See Wired 5.05, page 162.) Considerable power is also consumed by a disk drive, which is why drives typically spin down, then start up as needed. As it happens, there are all sorts of other reasons to get rid of moving parts, a direction the industry is already pursuing. The rest of a laptop's power consumption comes from circuitry, which can be made very power-efficient with modest trade-offs in performance. With the exception of the display, then, industry trends do not necessarily fly in the face of low-power computing. In fact, Intel and Toshiba have massive programs under way in so-called flash memory, which uses no power to hold onto data, a little to read it, and a little more to write it. In short, making a very power-efficient computer - one that uses only a bit more than your wristwatch - is not quite as pie in the sky as you imagine. Some of the issue is just fire in the belly.
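An illustrative budget makes the ordering concrete. Every wattage below is my own rough assumption for a late-1990s laptop; only the display's 25-40 percent share comes from the column:

```python
# Assumed (illustrative) component draws for a late-1990s laptop, in watts.
budget = {
    "backlit display": 4.0,
    "disk drive": 2.5,
    "circuitry": 3.5,
}
total = sum(budget.values())
for part, watts in budget.items():
    print(f"{part:16s} {watts:4.1f} W  ({100 * watts / total:.0f}%)")
# The display lands at 40% of the 10 W total - the top of the column's range.
```

Shrink the display's slice (reflective screens) and spin down the disk, and most of the budget is circuitry, which is exactly the part that flash memory and low-voltage logic attack.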
battery, which then drives an electronic timepiece. These would have been perfect for notorious British publisher Bob Maxwell, who once told me that the last time he did any exercise was when he wore a watch that needed winding. Thad Starner, an MIT PhD student, has studied human-powered computing in some detail and with considerable self-interest - he has worn his computer for more than five years. Though the human neck can generate considerable heat, Thad has concluded, locomotion is the best source of power. He estimates that 5-8 watts can be recovered from walking. A great deal of body energy, in other words, is simply dissipated, like waves crashing onto a beach. Recovering just a bit of it could be quite important for "effectively" powerless computing.
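Starner's estimate invites one more bit of arithmetic. Assuming (my figure, not his) a well-behaved laptop drawing about 10 watts, the recoverable power from walking covers most of the load:

```python
# How much of an efficient laptop's draw could walking supply?
WALKING_WATTS = (5, 8)   # the column's range for power recovered from walking
LAPTOP_WATTS = 10        # assumed draw of an efficient laptop

low = WALKING_WATTS[0] / LAPTOP_WATTS
high = WALKING_WATTS[1] / LAPTOP_WATTS
print(f"walking covers {low:.0%} to {high:.0%} of the laptop's draw")
# prints "walking covers 50% to 80% of the laptop's draw"
```

Even at the low end, half the machine's appetite comes free with your commute - which is what makes "effectively" powerless computing more than a slogan.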
Combustion?
At the turn of the century, steam engines provided about 80 percent of the total capacity for driving machinery. Today, most office equipment uses electricity, which in the US alone accounts for a $2.1 billion energy bill, not counting air-conditioning. That's wildly out of scale with the developing world, especially the poorest countries in, say, Africa, where per capita power consumption is 5 percent of ours. The answer may be combustion. People are making serious progress in putting a fuel-burning, microelectromechanical engine on a chip. Butane, for example, has very high energy density. With an onboard generator to convert that chemical energy to electricity, you might one day simply fill your laptop with gas if you're tired of pedaling. Next: Toys of Tomorrow
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.02, February 1998.]
http://www.media.mit.edu/~nicholas/Wired/WIRED6-01.html
The Third Shall Be First
organization's energy will be directed into telecommunications infrastructure - not for phones, but for access to the Internet. This is where most of the billion users will come from by December 31, 2000.

Celestial coincidence

The planets of change seem to be lining up. Take the privatization of telephone companies. Competition (not to mention technology) has proven that costs will plummet: sooner or later, every civilized place will have a low and fixed rate for unlimited local calls. This will completely change how children use the Net. Yet telephone rates are the most expensive precisely where they should be the cheapest - in the developing world. It is time to take celestial intervention quite literally. A combination of geostationary and low-Earth-orbiting satellites - GEOs and LEOs - can and will change Internet usage in the ROW, especially for the more than 2.5 billion people who live in poor, rural areas. GEOs are interesting because many of the orbital slots over places like Africa are underused, unused, or, frankly, wasted on broadcast systems. A 1-meter dish, of course, could make all the difference for a remote developing-world school. That's now within reach, thanks to companies like Tachyon, which will soon sell a turnkey satellite link for US$2,700 - a price that promises to drop to $1,500 by the end of 1999. In the long run, LEOs are even more interesting. The first LEO, Motorola's Iridium, will start service before the end of '98; its 66 satellites will circle the planet and, at least initially, be underutilized in developing countries. It is not hard to imagine the same satellite that was designed for an affluent, roaming cell-phone user being used by a poverty-stricken, stationary child - for bits. In the past five years, developed nations have jockeyed for position in the digital world. Finland and Sweden are well in the lead in Europe, while their neighbors, France and Germany, have fallen increasingly far behind.
In other words, the "Third World" five years from now may not be where you think it is. There have been many theories of leapfrog development, none of which has yet survived the test of time. That's about to change.
Nicholas Negroponte (nicholas@media.mit.edu), founder and director of the MIT Media Lab, is senior columnist for Wired.
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.01, January 1998.]
http://www.media.mit.edu/~nicholas/Wired/WIRED5-12.html
NEGROPONTE
Nation.1
In February 1995, the European Commission hosted a G7 roundtable on the information society. Envoys ranging from heads of state to prominent industrialists debated the Global Information Infrastructure. The Japanese delegation included, among others, Isao Okawa, chair of Sega. His participation was quiet, but his return to Japan was not. He was determined to correct what he saw as a glaring omission: the people most affected by the coming information society - that is, children - were utterly unrepresented. He decided to change this. Within eight months, Okawa conceived, funded, and implemented the first Junior Summit. For four days in Tokyo that October and November, 41 children from 12 nations convened for a milestone meeting at which adults found room only in the audience. The young people, 12 to 18 years old, addressed issues ranging from the environment and peace to communications; some participants were involved in using the Internet to compose music collaboratively and perform it live for the first time. The event was a resounding success. Now, some two years later, Okawa is determined to see another such assembly take place in a broader international setting. To this end, MIT has been asked to host the second Junior Summit in 1998, under the direction of Media Lab professor Justine Cassell.
Agents of change
The second Junior Summit presents a chance to increase the number of countries at the table, to give the conference participants more time for discussion, and to let them disseminate their conclusions more widely. Children from every country in the world are invited to discuss the future of young people in the digital age. Of course, linking children around the world will not in itself solve the problems of world hunger, poverty, and repression. However, children together may make a step toward solving these problems and others that we adults are not child enough to recognize. The simple act of uniting children will widen their perspectives on their own lives, and the lives of those who come after. It will deepen their understanding of their own problems, and the problems of those who are unlike them. It will lead to a better world, as children become empowered to seek solutions globally and implement them locally. For the adults around these children, this process of discovery can enlighten all efforts to make the information society everybody's society. The second Junior Summit seeks to engage 200 children between the ages of 10 and 16 from around the world. Participants will be selected based on how well they can document - in their native language, through a video or photographic essay, through a piece of music, or through drawing or painting - the state of children in their community, with particular focus on how the digital revolution is affecting them. Those children who do not yet have anything to document with respect to the digital revolution are asked to give their vision of a global community. The 200 selected children will meet online for six months of debates, discussions, and the creation of artistic works. Simply participating in the online forum will allow children to be agents of change in their communities - all of those who are chosen will be given computers and Internet connections, which will be set up in their local schools or community centers. After six months online, the participants will choose 60 delegates to represent them at the summit at MIT, where they will solidify their positions and, finally, present their arguments to world leaders. Following the summit, children will be matched with mentors from industry, government, and education who will help them launch local action projects to share the benefits and continue the momentum of the summit.
A better world
One topic on the table will be a proposal by five alumni of the first Junior Summit to start Nation.1 - a virtual nation for children, with its own voice, flag, and currency, but without borders or centralized government. This nation would apply for membership in the UN and make every effort to include children from developing nations. Here is an excerpt from Nation.1's first proclamation: As a kid growing up with computers, you have ideas, you see possibilities, but they don't count, you're just a kid. Adults need kids, they just don't realize it. They can't relate to what kids have to offer, because they don't understand technology the way kids do. Kids have valuable perspectives, but the world offers no mechanism to voice their opinions. They have no representation in world politics and they have no influence in the decisions that govern their future. So with the help of the second Junior Summit, a group of young, very wired individuals is going to bend, twist, and distort some barriers with the hope those barriers will come undone. We are going to create a country in cyberspace, not defined by geography or race, but by technology and age: Nation.1 - a country populated and run by kids. Nation.1 is just beginning, and we are considering how to create digital political systems, how to deal with language barriers, how the technology behind the country will work. We passionately believe it's worth it, because uniting kids changes their perspective, widens their understanding, and leads to a better world. Proposals like Nation.1 may seem outrageous, even unthinkable, compared with what we adults would have suggested. That's the way it should be: ultimately, the world must go past what adults believe will succeed. The global information society is ours only to dream - it will be up to these children to live it out. If you are 10 to 16 and interested in the Junior Summit, check out www.jrsummit.net/, or write to Junior Summit, MIT Media Lab, Cambridge, MA 02139. For further information on Nation.1, email nation1@2b1.org or visit www.2b1.org/nation1/. This column was cowritten with Justine Cassell (justine@media.mit.edu), professor in the Learning and Common Sense section of the MIT Media Lab and director of the Gesture and Narrative Language Group. Next Issue: Powerless Computing
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.12, December 1997.]
NEGROPONTE
I used to think that anybody who worried about standards was boring (perhaps because they were). Now I seem to be one of them. One of the biggest problems any traveler has with laptop computing, especially in Europe, is the plugs. Europeans have more than 20 different formats, with the only semblance of a standard being the one that powers an electric razor. There is actually a committee addressing the so-called Europlug - some estimates range upward of a quarter century before such a standard can be implemented, if found, and then only at huge cost. Atoms require enormous effort. Agreeing on physical form is very hard. This is not limited to the manufacturing specifications for metal and plastic machinery. The first two months of the 1968 Vietnam peace negotiations in Paris were devoted to determining the shape of the table.
Modems do it right
Plugs don't handshake. Modems do. This process is not too different from dogs sniffing each other. Modems try their best to communicate at the fastest possible speed, using whatever common error correction they share. Today, some of this is controlled in software; tomorrow all of it can and will be. The reason this works is simple: people have agreed on headers and metadescriptions. In other words, the standard is about how you will describe yourself, not what you are. This is important not only for massive globalization, but also for upgrading and future change. TV of the future - at least in the US, thanks to the Federal Communications Commission - will be flexible. In spite of the broadcast industry, the FCC refused to set anything but transmission standards. The result will be a slow blend of the Web (as kids know it) and TV (as baby boomers knew it). How a signal arrives, by land or by air; where it comes from, near or far; and what it looks like, a postage stamp or HDTV - all will be described in the signal, not decided by folks in Geneva or Washington, DC.
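The handshake the column describes is just capability negotiation: each side advertises what it can do, and the link settles on the best overlap. A toy sketch - the speed sets below are illustrative, not any real modem's table:

```python
def negotiate(ours, theirs):
    """Pick the fastest speed both ends support, or None if there's no overlap."""
    common = ours & theirs
    return max(common) if common else None

# Illustrative capability sets, in bits per second.
modem_a = {9600, 14400, 28800, 33600}
modem_b = {9600, 14400, 28800}

print(negotiate(modem_a, modem_b))  # prints 28800
```

The point of the column survives the simplification: neither side dictates the outcome, and upgrading one end never breaks the conversation - the link simply settles wherever the overlap now ends.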
Higher standards
What the standards bodies need to do is turn their attention to some of the larger issues: while God may be in the details, a great deal needs to be said about the broad brush. The reason to make global standards is global communications. This means people communicating with people. And people have the biggest standards problem of all - they often don't speak the same language. If a Martian were to turn an ear toward our planet, conversations around the world would sound like modems unable to communicate with each other. In the face of today's digital globalization, it would be hard to explain the thousand-plus written languages and the scores of spoken dialects. On the other hand, people constantly question the digital dominance of English. Yet, as I like to remind them, we are glad that a French pilot lands an Airbus at Charles de Gaulle airport speaking English to the tower, as it means that other planes in the vicinity can understand. English as a second language, with or without computers, has become an international protocol of sorts and an accepted means of traffic control - even ship to shore. In the same way, English will continue to be the air traffic control language of the Net 10 years from now. But it will stop being the dominant carrier of content - English will be replaced by Chinese. Still, all sorts of other languages will flourish as well. I remember once defending small cultures and native tongues in these pages (see "Pluralistic, Not Imperialistic," Wired 4.03, page 216), only to be told by a reader that I got it all wrong. The issue, he said, was not English versus language X, but English versus ASCII. Boy, was he right. The ASCII standard is a huge problem, not the least of it being the insufficient number of bits for kanji characters or calligraphic fonts. In fact, without taking much note of this limitation, we have cemented ASCII into place in a far more entrenched fashion than English. We had better learn a lesson - and quickly. That lesson, however, is not to invent another Esperanto, but to realize that our bitstreams will be in different languages, which need some standard headers. Making the Net multilingual-ready is even more important than setting the metastandards for our modems and TVs. International bodies must recognize that a higher level of communications standard is needed to make sure that all languages are equally accommodated and self-descriptive. The 5 billion people not using the Net today have a lot to say. Kids know that. Next Issue: Jr. Summit
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.11, November 1997.]
NEGROPONTE
During the 1982 World Conference on Cultural Policies in Mexico City, Jack Lang, the French minister of culture, declared that "cultural and artistic creation is a victim of a system of multinational financial domination against which we must organize ourselves." Then he called the object of his tirade the product of "financial and intellectual imperialism." What prompted this outburst? The TV soap Dallas. That foolish program was so popular worldwide, it became a symbol of American cultural imperialism and a threat to European identity. More than 13 years later, last December, French President Jacques Chirac warned leaders of the world's 47 French-speaking nations that if English continues to dominate the information highway, "our future generations will be economically and culturally marginalized." Chirac declared that 90 percent of information transmitted on the Net is in English, and it threatens to steamroll French language and culture. Hello? Mr. Chirac, if anything is going to restore cultural identities, large and small, it is the Internet. I won't ask you what your forefathers did to the native language of Benin, the African nation where you made this proclamation. But I will remind you that the World Wide Web was invented in Switzerland, in the French-speaking part no less, and your own Minitel system is twice the size of America Online. The idea that the Net is another form of Americanization and a threat to local culture is absurd. Such conviction completely misses and misunderstands the extraordinary cultural opportunities of the digital world.
Berlusconi of the Net," Wired 4.01, page 78). When he discovered it, instead of grumbling about Netscape being in English, he created a multilingual browser and service already used or accessed by more than 500,000 people around the world. Video On Line validates the decentralist structure of the Net, especially in the European context, in which governments own the highly centralist telephone companies that dominate the continent's telecommunications with poor service and high costs. Colonialism is the fruit of centralist thinking. It does not exist in a decentralized world.
Germany will speak with a taxi driver in English. Similarly, the air traffic control standard is almost always English. This lingua franca should not be confused with cultural identity, nor should it be the basis for culture-wars rhetoric. In fact, thank God we have the means to share an operational language. It is not the language of love, good food, and fine wine - it is certainly not the language of Voltaire - but it is a utilitarian language that lands planes safely and keeps the Net's infrastructure running. The Net is not produced and bottled in the US. In fact, more than 50 percent of Net users are outside the US, and that percentage is rising. By 2000, less than 20 percent of all Internet users will be in the US. So please, Mr. Chirac, stop confusing chauvinism with imperialism. The Net is humankind's best chance to respect and nurture the most obscure languages and cultures of the world. Your flaming is counterproductive to making our planet more pluralistic. Next Issue: Affective Computing
[Previous | Next] [Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.03 March 1996.]
NEGROPONTE
Affective Computing
Roz Picard, a professor at MIT, believes that computers should understand and exhibit emotion. Absurd? Not really. Without the ability to recognize a person's emotional state, computers will remain at the most trivial levels of endeavor. Think about it. What you remember most about an influential teacher is her compassion and enthusiasm, not the rigors of grammar or science. Consider one of the simplest forms of affect: attention. Isn't it irritating to talk to someone as his attention drifts off? Yet all computer programs ignore such matters. They babble on as if the user were in a magical state of attentiveness.
Emotional intelligence
Unless it is used like film or music - essentially as a vehicle for human expression - affective computing may strike you as over the edge. After all, isn't freedom from emotional vagaries one of the advantages of a computer? You certainly don't want to wait for your computer to become interested in what you have to say before it will listen. Should a computer be limited to recognizing emotions and yet be prohibited from having emotions? Too much emotion is clearly undesirable; we all know it wreaks havoc on reasoning. However, consider recent scientific findings regarding people who are essentially emotionally impaired (suffering from a tragic kind of brain injury). These people do not merely miss out on a luxurious range of feelings; they also lack basic rational decision-making abilities. The conclusion is that not enough emotion also impairs reasoning. Similarly, after decades of artificial intelligence efforts, unemotional, rule-based computers remain unable to think and make decisions. Endowing computers with the ability to recognize and express emotion is the first challenge; on its heels is a greater one - emotional intelligence.
For example, an affective steering wheel might sense you're angry (anger is a leading cause of automobile accidents). But what should it do? Prohibit you from driving while you, with escalating anger, rip out its sensors? Of course not. Emotional intelligence is a question of balance - a tutor reading emotional states and knowing when to encourage and when to let it rest. Until recently, computers have had no balance at all. It's time to recognize affect as a facet of intelligence and build truly affective computers. This column was co-authored with Rosalind W. Picard (rwpicard@media.mit.edu), NEC Career Development Professor of Computers and Communications at the MIT Media Lab. Next Issue: Caught Browsing Again
[Previous | Next] [Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.04 April 1996.]
NEGROPONTE
Wearable Computing
The digital road warrior's kit - laptop, cell phone, PDA, and pager - is just capable enough to bother you everywhere without necessarily helping you anywhere. It's absurd that each device is still on such poor speaking terms with the others. We walk around like pack horses saddled with information appliances. We should be in the saddle, not under it.
you in a Person Wide Web. How better to receive audio communications than through an earring, or to send spoken messages than through your lapel? Jewelry that is blind, deaf, and dumb just isn't earning its keep. Let's give cuff links a job that justifies their name. Footwear is particularly attractive for computing. Your shoes have plenty of unclaimed space, receive an enormous amount of power (from walking) that is currently untapped, and are ideally placed to communicate with your body and the ground. And a shoe bottom makes much more sense than a laptop - to boot up, you put on your boots. When you come home, before you take off your coat, your shoes can talk to the carpet in preparation for delivery of the day's personalized news to your glasses.
Cyborgs
Cyborgs are here already. No, this isn't a paranoid fantasy about intruders from the future. Two cyborgs have been roaming the Media Lab, wearing computers day in and day out for over two years. It's an uncanny experience teaching a course to Thad Starner, who is simultaneously watching you lecture and annotating the lecture notes behind you through Reflection Technologies' Private Eye, a wearable heads-up display (the same used in Nintendo's Virtual Boy). Steve Mann goes further, wearing a completely immersive system: movable cameras connect to a local computer and a transmitter to send video to a workstation for processing and delivery back to displays in front of his eyes. This lets him enhance what he sees (he likes living in a "rot 90" rotated world) and position his eyes. (Some days he likes having his eyes above his head, or at his feet, and when he rides a bicycle he sets one eye looking forward and one backward.) He can assemble everything he's seen into larger mosaics or 3-D images, and through the radio-frequency link you can see through his eyes (at http://www-white.media.mit.edu/~steve/netcam.html). Don't expect to see much computing featured in Bill Blass's next collection, but this kind of digital headdress will become more common. Bear in mind that 20 years ago, no publisher anticipated that teletype terminals would grow into a portable threat to books, that paper tapes would merge with film into multimedia CD-ROMs, or that telephones would threaten the whole business model of publishing by bringing the Web into your home. The difference in time between loony ideas and shipped products is shrinking so fast that it's now, oh, about a week. This article was co-authored by Neil Gershenfeld (gersh@media.mit.edu), an MIT professor and one of three co-principal investigators of the Media Lab's newest research consortium, Things That Think. Next Issue: Where Do New Ideas Come From?
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.12 December 1995.]
NEGROPONTE
Where Do New Ideas Come From?
Ideas come from people, obviously. But under what conditions are groups, corporations, and even nations most likely to foster new ideas? It's not an easy question. Many of the essentials of a fertile, creative environment are anathema to an orderly, well-run organization. In fact, the concept of "managing research" is an oxymoron. Setting short-term goals, then quickly testing to see if they will bear fruit is similarly absurd. Jerome Wiesner, former president of MIT and science advisor to President Kennedy, was fond of saying, "That's like planting a seedling and, a short while later, yanking it out to see if the roots are healthy." Ideas may come like thunderbolts, but it can take a long time to see them clearly - too long. And ideas are often born unexpectedly - from complexity, contradiction, and, more than anything else, perspective. Alan Kay, father of the personal computer (among other things), likes to say that perspective is worth 50 points of IQ (it may be worth more, Alan). Marvin Minsky, father of artificial intelligence, says that you don't know something until you know it in more than three ways. They're both quite right.
on silicon. CPUs get consistently faster for the same price, in more or less the same-sized package. Incrementalism works in this case, but as a function of local refinements, not big new ideas. On the other hand, being digital (sorry) is more global and cuts across most of life. Joel Birnbaum, the luminous head of research at Hewlett-Packard, calls future computing "pervasive": "something you do not notice until it is missing." Such research must look outward, because it's not just about the next-generation PC, it's about life. IBM and Intel, among others, have sometimes suffered from looking inward too much and growing only their own company's kind of people. None of these businesses would want Albert Einstein or Bertrand Russell in its labs - let alone running them - even though the presence of such minds would surely bring perspective and help dampen incrementalism. Companies just don't work that way.
Universities do
Research universities are a good example of a source of new ideas, but they are suffering from federal cutbacks and hence, looking for corporate support. Some faculty members and administrators complain that turning to industry for funding compromises their research, shackles researchers, and makes scholarship shortsighted - "prostitution" is a word I have heard mumbled. Boy, are they wrong. We are precisely at a time when universities can do exactly what corporations cannot do and the government should not do: foster and nurture new ideas. Let me qualify that. Government is not needed as a patron (the National Science Foundation could go away). But government may be a creative client, like a corporation, which in some ways is how the departments of energy, transportation, defense, and others work. Economic recession may be the best thing that has ever happened to university (as well as government) research, because companies have realized that they cannot afford to do basic research. What better place to outsource that research than to a qualified university and its mix of different people? This is a wake-up call to companies that have ignored universities - sometimes in their own backyards - as assets. Don't just look for "well-managed" programs. Look for those populated with young people, preferably from different backgrounds, who love to spin off crazy ideas - of which only one or two out of a hundred may be winners. A university can afford such a ridiculous ratio of failure to success, since it has another, more important product: its graduates.
place where it is out of place and stimulate ideas, shake up establishments, and don't take no for an answer. This poses an interesting challenge to any research organization: be even more nimble and supportive of the unconventional, tolerate more way-out and expensive ideas, and encourage the seemingly disheveled behavior of hacker life. In the pool of knowledge at a university, professors are not the fish, but the pond. The water is not chlorinated, clear, precisely circumscribed, and inhabited by one kind of perfect goldfish. It is a muddied habitat with fuzzy edges and home to all sorts of people, including those who do not fit traditional scholarship. That is where new ideas come from. Next Issue: The Future of Books
[Previous | Next] [Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.01 January 1996.]
NEGROPONTE
The Future of Books
What weighs less than one millionth of an ounce, consumes less than a millionth of a cubic inch, holds 4 million bits, and costs less than US$2? What weighs more than 1 pound, is larger than 50 cubic inches, contains less than 4 million bits, and costs more than $20? The same thing: Being Digital stored on an integrated circuit and Being Digital published as a hardcover book. The most common question I get is, Why, Mr. Digital Fancypants, did you write a book? Books are the province of romantics and humanists, not heartless nerds. The existence of books is solace to those who think the world is turning into a digital dump. The act of writing a book is evidence, you see, that all is not lost for those who read Shakespeare, go to church, play baseball, enjoy ballet, or like a good long walk in the woods. Anyway, who wants to read Michael Crichton's next book, let alone the Bible, on screen? No one. In fact, the consumption of coated and sheet paper in the United States has gone from 142 pounds per capita in 1980 to 214 pounds in 1993.
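The arithmetic behind that chip-versus-hardcover comparison is easy to check: 4 million bits is about half a megabyte, roughly the character count of a full-length trade book. A quick back-of-the-envelope sketch (the six-characters-per-word average is an assumption, not a figure from the column):

```python
# Sanity-check the 4-million-bit figure against the text of a book.
bits = 4_000_000
bytes_ = bits // 8       # one byte per character: 500,000 characters
words = bytes_ // 6      # assumed ~6 characters per English word, incl. space

print(bytes_, words)     # 500000 characters, 83333 words - book-sized
```

At roughly 83,000 words, the chip indeed holds the text of a typical hardcover.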
wonder what will economically support so many sites (today, one homepage is added every 4 seconds), just think books. You say to yourself, surely most of those Web sites will go away. No way. There will be more and more and, like trade books, there will be an audience for all of them. Instead of worrying about the future of the book as a pulp standard, think about it as bits for all: bestseller bits, fewer specialty-seller bits, and no-seller bits for grandparents from grandchildren. Meanwhile, some of us in research are working really hard to make digital books feel good and be readable - something you can happily curl up with or take to the john. Next Issue: Language on the Net
[Previous | Next] [Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.02 February 1996.]
NEGROPONTE
Being Decimal
Like dogs, laboratories age considerably faster than people. But while dogs age at a factor of seven, I would say labs age at a factor of 10, which made the MIT Media Lab 100 years old last month. When we officially opened our doors for business in October 1985, we were the new kids on the block, considered crazy by most. Even The New York Times called us "charlatans." While I was slightly hurt at being referred to as "all icing and no cake," it secretly pleased me because I had no doubt that computing and content would merge together into everyday life. Now, 10 years later, "multimedia" is old hat. The term appears in the names and advertising jingles of some of the most staid corporations. But becoming part of the establishment is a lot less fun than experiencing the risk and abuse of pioneering. So, how does a lab avoid sclerosis? How do we move into high-risk areas of the future after receiving acclaim and recognition for our past? The answer is intimately tied to the nature of a research university - an institution that is both a liability and an asset when you're doing research. The liability is tenure, which guarantees lifetime employment for faculty, some of whom have long forgotten their creativity. The asset is students. I'm fond of telling people that I run a company with 300 employees and 20 percent turnover each year. But it's not just new faces. The incoming lot are always between 16 and 25 years old, even though the rest of us get older each year. That 20 percent churn is the fountain of youth.
catch a lot less. One has to know when it's time to find a new pond. No, we don't drop everything and start working on cold fusion or a method for turning lead into gold. The change is as much an attitude as anything else. For example, there are 30,000 Americans over the age of 100. When these centenarians were studied, researchers found that diet, exercise, and healthy living were not the common denominator or prime force behind their longevity. Instead, the common traits were, in reverse order of priority: successfully coping with loss, keeping busy, and maintaining a positive attitude. I believe the same holds true for a laboratory. In our case, we are (barely) coping with the loss of Jerome Wiesner and Muriel Cooper (see Wired 2.10, page 100), everyone is extremely busy, and our optimism is contagious.
telephone. Telephones should never ring. If you're not there, the ringing is useless. If you are there, you'd probably prefer the phone be answered by a digital butler. If that digital butler determines that the call should be passed through, perhaps the nearest object should alert you. And that might be a doorknob. You may or may not be convinced, but we are - sufficiently so to start a major new program called "Things That Think" on the occasion of our 10th birthday. An important component of the research is wearable computing. If this sounds silly to you, all the better. Ten years ago, "media convergence" was also considered silly. Tune back in when we are 20. Or rather, 200. Next Issue: Wearable Computing
[Previous | Next] [Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.11 November 1995.]
NEGROPONTE
Sensor Deprived
When it comes to sensing human presence, computers aren't even as talented as those modern urinals that flush when you walk away. You can lift your hands from a computer's keyboard (or even between keystrokes) and your computer does not know whether the pause is for momentary reflection, or for lunch. We give a great deal of attention to human interface today, but almost solely from the perspective of making it easier for people to use computers. It may be time to reverse this thinking and ask how to make it easier for computers to deal with people. A recent Media Lab breakthrough by Professor Neil Gershenfeld solves a range of user interface problems with a few dollars of hardware. A varying electric field induces a small (nanoamp) current in a person that can be measured to locate the person in the field, making it possible to build smart appliances and furniture that remotely and unobtrusively locate fingers or hands in 2-D or 3-D, bodies in chairs, or people in rooms. Another way for computers to sense human presence is through computer vision - giving machines the ability to see. Companies like Intel are now manufacturing low-cost hardware that eventually will lead to an embedded video camera above the screen of almost every desktop and laptop computer. This makes it possible for humans to telecommute and to collaborate visually from a distance. The computer could use that same camera to look at its user. Furthermore, machine vision could be applied to sensing and recognizing smiles, frowns, and the direction of a person's gaze, so that computers might be more sensitive to facial expression. Your face is, in effect, your display device; it makes no sense for the computer to remain blind to it. I am constantly reminded of the tight coupling between spoken language and facial expression. When we talk on the telephone, our facial expressions are not turned off just because the person at the other end cannot see them.
In fact, we sometimes contort our faces even more to give greater emphasis and prosody to spoken language. By sensing facial expressions, the computer could access a redundant, concurrent signal that enriches the spoken or written message.
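The field-sensing idea lends itself to a toy illustration. The sketch below assumes an inverse-square falloff of induced current with distance and entirely invented electrode positions and constants - the Lab's real hardware and math are certainly more refined - but it shows how a handful of current measurements can be inverted into a 2-D position estimate:

```python
# Hypothetical receiver electrodes at known 2-D positions (metres).
RECEIVERS = [(0.0, 0.0), (0.3, 0.0), (0.0, 0.3)]
K = 1e-9  # assumed coupling constant: measured current ~ K / distance^2

def predicted_currents(pos):
    """Currents the electrodes would measure for a hand at `pos`."""
    return [K / ((pos[0] - rx) ** 2 + (pos[1] - ry) ** 2)
            for rx, ry in RECEIVERS]

def locate(measured, step=0.01):
    """Grid-search a 30 cm x 30 cm sensing area for the position whose
    predicted currents best match the measured ones (least squares)."""
    best, best_err = None, float("inf")
    for i in range(31):
        for j in range(31):
            pos = (i * step, j * step)
            # skip grid points sitting on an electrode (division by zero)
            if any((pos[0] - rx) ** 2 + (pos[1] - ry) ** 2 < 1e-9
                   for rx, ry in RECEIVERS):
                continue
            err = sum((p - m) ** 2
                      for p, m in zip(predicted_currents(pos), measured))
            if err < best_err:
                best, best_err = pos, err
    return best

# Simulate a hand at (0.10, 0.20) and recover its position from currents.
print(locate(predicted_currents((0.10, 0.20))))
```

A real sensor would fit against noisy readings rather than grid-search an exact model, but the inversion step is the same in spirit.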
High-Touch Computing
The dark horse in graphical input is the human finger. This is quite startling, considering the human finger is a device you don't have to pick up. You can move gracefully from typing (if typing has grace) to pointing, from horizontal plane to vertical. Why hasn't this caught on? Some of the limp excuses follow:
- You occlude that which is beneath your finger when you point at it. True, but that happens with paper and pencil, as well, and has not stopped the practice of handwriting or of using a finger to identify something on hardcopy.
- Your finger is low resolution. False. It may be stubby, but it has extraordinary resolution when the ball of the finger tip touches a surface. Ever so slight movement of your finger can position a cursor with extreme accuracy.
- Your finger dirties the screen. But it also cleans the screen. One way to think about touch-sensitive displays is that they will be in a kinetic state of more or less invisible filth, where clean hands clean and clammy ones dirty.
The real reason for not using fingers is, in my opinion, quite different. With just two states - touching or not touching - many applications are awkward at best. Whereas, if a cursor appeared when your finger was within, say, a quarter of an inch of the display, then touching the screen would be like the multi-states of a mouse click or data tablet. With such "near-field" finger-touch, I promise you, we would see many touch-sensitive displays.
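A near-field display of this kind would behave like a three-state mouse. A minimal sketch, where the quarter-inch threshold comes from the text but the state names and sensing interface are purely illustrative:

```python
# States a near-field touch surface could distinguish, by analogy with
# a mouse: out of range, hover (cursor tracks the finger), and touch.
HOVER_RANGE = 0.25  # inches; the quarter-inch threshold mentioned above

def finger_state(height_inches):
    """Map a sensed finger height above the display to an interaction
    state. `None` means no finger was detected at all."""
    if height_inches is None or height_inches > HOVER_RANGE:
        return "out-of-range"   # no cursor shown
    if height_inches > 0.0:
        return "hover"          # cursor appears and tracks the finger
    return "touch"              # finger on the glass: acts like a click

# A finger approaching and then touching the screen:
for h in [1.0, 0.2, 0.05, 0.0]:
    print(h, finger_state(h))
```

The hover state is what rescues touch from the two-state awkwardness the column describes: it gives the interface the equivalent of a mouse's tracking-without-clicking.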
Eyes as Output
Eyes are classically studied as input devices. The study of eyes as output is virtually unknown. Yet, if you are standing 20 feet away from another person, you can tell if that person is looking
right in your eyes or just over your shoulder - a difference of a tiny fraction of a degree. How? It surely isn't trigonometry, wherein you are computing the angle of the other person's pupil and then computing whether it is in line with your own gaze. No. That would require unthinkable measurement and computation. There is some kind of message passing, maybe a twinkle of the eye, which we just don't understand. We constantly point with our eyes and would find such computer input valuable. Imagine reading a computer screen and being able to ask: What does "that" mean? Who is "she"? How did it get "there"? "That," "she," and "there" are defined by your gaze at the moment, not some clumsy elaboration. It makes perfect sense that your question concerns the point of eye contact with the screen and, to reply, the computer must know the precise point. In fact, when computers can track the human eye at a low cost, we are sure to see an entire vocabulary of eye gestures. When that happens, human-computer interaction will be far less sensor deprived and more like face-to-face communication, and be far better for it. Next Issue: Digital Etiquette
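The gaze-as-pointer idea reduces, in its simplest form, to hit-testing the point of eye contact against tagged regions of the screen. A hypothetical sketch - the region names, coordinates, and layout are invented for illustration:

```python
# Hypothetical screen regions, each tagged with what it denotes,
# as (left, top, right, bottom) pixel rectangles.
REGIONS = {
    "word:quixotic": (100, 120, 160, 135),
    "photo:author":  (300, 40, 420, 160),
    "map:Boston":    (50, 300, 250, 450),
}

def resolve_deixis(gaze_x, gaze_y):
    """Return what the user's gaze is resting on, so a question like
    'What does "that" mean?' can be answered about the right referent."""
    for referent, (left, top, right, bottom) in REGIONS.items():
        if left <= gaze_x <= right and top <= gaze_y <= bottom:
            return referent
    return None  # gaze is not on anything we can name

print(resolve_deixis(130, 128))  # gaze on the word -> word:quixotic
```

A real eye tracker would add smoothing and dwell-time filtering before the hit test, but the resolution of "that," "she," and "there" is ultimately this lookup.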
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.10 October 1994.]
NEGROPONTE
Digital Etiquette
Imagine the ballroom of an Austrian castle during the 18th century, in full gilded splendor, glittering with the reflected light of hundreds of candles, Venetian mirrors, and jewels. Four hundred handsome people waltz gracefully to a 10-piece orchestra. Now imagine the same setting, but with this change: 390 of the guests learned how to dance the night before, and they are all too conscious of their feet. This is similar to the Internet today: most users are all fingers. The vast majority of Internet users are newcomers. Most have been on it for less than a year. Their first messages tend to flood a small group of select recipients, not only with page after page, but with a sense of urgency suggesting the recipient has nothing else to do. Worse, it is so simple and cost-free to forward copies of documents that a single hit of the Return key can dispatch 15 or 50,000 unwelcome words into your mailbox. That simple act turns e-mail from a personal and conversational medium into dumping; it is particularly distressing when you are connected over a narrow link. Some of us who have been on the Internet or its predecessors for a long time (a quarter of a century, in my case) pride ourselves on being available. The e-mail address above is my real email address, and I make every effort to answer everything I receive. Therefore, I feel a right to be opinionated about its abuse as a communications medium. Netiquette is particularly important to me because I use e-mail during many hundreds of thousands of miles of travel each year, from foreign lands, in strange places, through weird positions (usually caused by an unfriendly telephone booth or hidden phone jack). One result is that I often see my e-mail at low and heavily error-prone bit rates. This strengthens e-character. 
One journalist commissioned to write about these newcomers and their inconsiderate use of the Internet researched his story by sending me and others a four-page questionnaire - without asking first and without the slightest warning. His story should have been a self-portrait. Common courtesy suggests a short introductory request - as opposed to the wholesale and presumptuous delivery of questions.
In general, however, e-mail can be a terrific medium for both the reporter and the reported. Email interviews are far more satisfying for people like me, because replies can be considered at leisure. They are less intrusive and allow for more reflection. I am convinced that e-interviews will happen more and more, ultimately becoming a standard tool for journalism around the world, provided that reporters can learn some manners.
Ugly Habits
Some of the ugliest digital behavior results from having plentiful bandwidth and using it with careless abandon. I am convinced that the best way to be courteous with alphanumeric e-mail on the Net is to assume the receiver of the message has a mere 1200 baud and only a few moments of attention. An example of the contrary (a habit practiced to my alarm by many of the most seasoned users I know) is returning a full copy of my message with a reply. That is perhaps the laziest way to make e-mail meaningful and it is a killer if the message is long (and the channel thin). It takes so little effort to weave referents into an answer or cut and paste a few relevant pieces. The opposite extreme is even worse, such as the reply "Sure." Sure, what? Similarly, the use of undefined pronouns is irksome when they refer to an earlier message. As distinguished from spoken conversation, e-mail has variable chunks of time (and space) between segments. The worst of all digital habits, in my opinion, is the gratuitous "cc" which, among other things, gives new meaning to the word "carbon." It has scared off many senior executives from being on-line. The big problem with electronic cc's is that they can multiply themselves, because replies are all too frequently sent to the entire cc list. If a person is organizing an impromptu international meeting and invites 50 people to attend, the last thing I want to see is the travel arrangements of the other 49.
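The cost of the full-copy-quoting habit is easy to quantify. A back-of-the-envelope sketch, assuming roughly six characters per word and ten bits per character on a serial line (both assumptions, not figures from the column):

```python
# Time to push a fully quoted long message through a thin link.
WORDS = 50_000
CHARS_PER_WORD = 6   # rough English average, including the space
BAUD = 1200          # bits/second; ~120 characters/s with framing bits

chars = WORDS * CHARS_PER_WORD    # 300,000 characters
seconds = chars / (BAUD / 10)     # 10 bits per character on the wire
print(f"about {seconds / 60:.0f} minutes")  # about 42 minutes
```

Three-quarters of an hour of connect time to re-deliver words the sender already has: the arithmetic makes the etiquette point by itself.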
manage my meetings. Consider the total time required for me to dictate a short letter (which I do sometimes), to have it typed, to proof it, to sign it, and to have it posted (or, heaven forbid, faxed). The elapsed time is surely no less than 20 minutes of total human time (probably more). By contrast, I can answer the same by e-mail in less than 20 seconds. My e-mail box is not polluted. (This column may end that.) The reason, I believe, is that people really don't want to foul their own doorstep. At the Media Lab my e-mail responsiveness is a family joke: never more than a few hours, 365 days a year. People are careful not to abuse my accessibility, because it is like an open door. If there is too much noise outside, it is easy to shut it. Wired e-mail is usually considered and interesting, and I learn a great deal from it. (But often it is too long.) If you are a newcomer to this medium, remember that some others are not and may live and die by it. The best netiquette advice I can offer you is: be brief. Next Issue: Digital Expression
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.11 November 1994.]
NEGROPONTE
Digital Expression
Jerome Wiesner, former president of MIT and co-founder of the Media Lab, tells a story about Vladimir Zworykin, who visited him one Saturday at the White House when Wiesner was John Kennedy's science advisor. He asked Zworykin if he had met the president. As Zworykin had not, Wiesner took him across the hall and introduced him as "the man who got you elected." Startled, the President asked, "How is that?" Wiesner explained: "This is the man who invented television." Kennedy replied, "How terrific. What an important thing to have done," to which Zworykin wryly commented: "Have you seen television recently?" Technological imperatives, and only those imperatives, drove the development of TV. Then it was handed off to a body of creative talent with different values and from a different intellectual subculture. Photography, on the other hand, was invented by photographers. The people who perfected photographic techniques did so for their own expressive purposes, fine-tuning the technology to meet the needs of their art. Means and messages were deeply intertwined. Personal computers have moved computer science away from the purely technical imperative and are evolving more like photography. Computing is being channeled directly into the hands of very creative individuals at all levels of society and is becoming the means for creative expression in both its use and development. The means and messages of multimedia will become a blend of technical and artistic achievement.
music recording). Or it can be considered from the perspective of musical cognition: how do we interpret the language of music, what constitutes appreciation, and where does emotion come from? Finally, music can be treated as artistic expression, with a story to be told and feelings to be aroused. The point is that all three are important in their own right and allow the domain - music - to be the perfect intellectual landscape for moving gracefully between science and art. The traditional kinship between mathematics and music is multiplied manyfold within the hacker community, which tends to be musically inclined, if not gifted. Even if music is not a student's professional objective, it satisfies an often important need for avocation. This can be generalized because many avocations are needlessly subordinated by parental and social forces, when they could be vehicles for more meaningful, deeper learning. The concept of a hobby is subject to great change in digital life. While it is used to mean an extracurricular passion, in the digital world such hobbies can be part of the toys with which we think and the tools with which we play. The computer provides a complete range of points of entry to music and does not limit access to the prodigious child, nor to those who are sufficiently disciplined or genetically inclined.
What was once only an abstract concept - like math - now has a window into it that has many components from the visual arts. What this means by extension is that computers will make our future adult population much more visually literate and artistically able than today. Ten years from now, teenagers are likely to enjoy a much richer panorama of options because the pursuit of intellectual achievement will not be tilted in favor of bookworms but cater to a range of expressive tastes. "The Return of the Sunday Painter," the title of a chapter I contributed to The Computer Age: A Twenty-Year View more than two decades ago, is meant to suggest a new era of respect for avocations and a future with more active engagement in making, doing, and expressing. My belief in this comes from watching computer hackers, both young and old. Their programs are like paintings: they have aesthetic qualities and are shown and discussed in terms of their meaning from many perspectives. Their programs include behavior and style that reflect their makers. These people are the forerunners of the new expressionists. Next Issue: Bits and Atoms
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.12 December 1994.]
NEGROPONTE
When returning from abroad, you must complete a customs declaration form. But have you ever declared the value of the bits you acquired while traveling? Have customs officers inquired whether you have a diskette that is worth hundreds of thousands of dollars? No. To them, the value of any diskette - full or empty - is the same: a few dollars, the value of the atoms. I recently visited the headquarters of one of the top five US integrated-circuit manufacturers. I was asked to sign in and, in the process, was asked whether I had a laptop computer with me. Of course I did. The receptionist asked for the model, serial number, and the computer's value. "Roughly US$1 to $2 million," I said. "Oh, that cannot be, sir," she replied. "What do you mean? Let me see it." I showed her my old PowerBook (whose PowerPlate makes it an impressive 4 inches thick), and she estimated its value at $2,000. She wrote down that amount, and I was allowed to enter. Our mind-set about value is driven by atoms. The General Agreement on Tariffs and Trade is about atoms. Even new movies and music are shipped as atoms. Companies declare their atoms on a balance sheet and depreciate them according to rigorous schedules. But their bits, often far more valuable, do not appear. Strange.
litter our homes, and to fill garbage sites with their information business, as long as this information is in the form of atoms - paper hurled over the transom. But as soon as the companies deliver the exact same information with no-deposit, no-return, environmentally friendly bits, they have broken the law. Doesn't that sound screwy? Was anyone thinking about the meaning of "being digital" during the time that AT&T was being disassembled? I fear not.
Markoff-on-Production
It was through The New York Times that I came to know and enjoy the writing of computer and communications business reporter John Markoff; without the Times, I probably would never have been introduced to him. Now, however, it would be far easier for me to collect his new stories automatically and drop them into my personal newspaper or suggested reading file. I would be willing to pay Markoff 5 cents for each of his new pieces. If one-fiftieth of the 1995 Internet population subscribed to this idea, and Markoff wrote 20 stories a year, he would earn $1 million, which I am prepared to guess is more than The New York Times pays him. If you think one-fiftieth is too large a percentage, then wait awhile. Once someone is established, the added value of a distributor becomes less and less in a digital world. Bits are far easier to distribute and move than atoms. But delivery is only part of the issue. A media company is, among other things, a talent scout, and its distribution channels, bits or atoms, provide a test bed for public opinion. But after a certain point, the author may not need this forum. In the digital age, WIRED authors can sell their stories direct and make more money, once they are discovered. While this does not work today, it will work very well, very soon - when "being digital" becomes the norm. Next Issue: Being Digital
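The back-of-the-envelope numbers above can be checked in a few lines. The 50-million figure for the 1995 Internet population is an assumption, back-solved from the column's claim that one-fiftieth of it would yield a million subscribers:

```python
# Hedged sanity check of the column's micropayment arithmetic.
# The population figure is an assumption implied by the column, not a fact it states.
internet_population = 50_000_000        # assumed 1995 Internet population
subscribers = internet_population // 50 # "one-fiftieth" of the population
price_per_story = 0.05                  # 5 cents per piece, per the column
stories_per_year = 20                   # 20 stories a year, per the column

annual_income = subscribers * price_per_story * stories_per_year
print(f"{subscribers:,} subscribers -> ${annual_income:,.0f} per year")
```

At those rates each subscriber pays only $1 per year; the million dollars comes entirely from scale, which is the column's point about cutting out the distributor.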
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.01 January 1995.]
NEGROPONTE
When I agreed to write the back page for WIRED, I had no idea what it would entail. I encountered many surprises. The biggest by far was my discovery that the magazine's readership included a wide range of people, not just those with an @ behind their name. When I learned that kids were giving WIRED to their parents as Christmas presents, I was blown away. There seems to be a huge thirst to understand computers, electronic content, and the Net as a culture - not just as a technology. For this reason, and with encouragement from many readers (both rants and raves), I decided to repurpose my WIRED columns into a book entitled Being Digital, which comes out the first of February. The idea sounded simple in June - but 20 stories don't necessarily string together into one book, even if they happen to be pearls. More important, so much has changed so quickly that the future-looking early stories have become old hat. To my surprise, one thing that held up from the beginning was that the columns used words alone - no pictures. That seemed to work. As one of the inventors of multimedia, I found it ironic that I never used illustrations. Furthermore, as a believer in bits, I had to reconcile myself to the idea that my publisher, Knopf, would be shipping mere atoms around.
In the United States, copyrights and patents are not even in the same branch of government. Copyright has very little logic: you can hum "Happy Birthday" in public to your heart's delight, but if you sing the words, you owe a royalty. Bits are bits indeed. But what they cost, who owns them, and how we interact with them are all up for grabs.
Digital Life
Here is where my optimism may have gotten in the way; I guess I have too many of those O (for optimistic) genes. But I do believe that being digital is positive. It can flatten organizations, globalize society, decentralize control, and help harmonize people in ways beyond not knowing whether you are a dog. In fact, there is a parallel, which I failed to describe in the book, between open and closed systems and open and closed societies. In the same way that proprietary systems were the downfall of once great companies like Data General, Wang, and Prime, overly hierarchical and status-conscious societies will erode. The nation-state may go away. And the world benefits when people are able to compete with imagination rather than rank. Furthermore, the digital haves and have-nots will be less concerned with race or wealth and more concerned (if anything) with age. Developing nations will leapfrog the telecommunications infrastructures of the First World and become more wired (and wireless). We once moaned about the demographics of the world. But all of a sudden we must ask ourselves: Considering two countries with roughly the same population, Germany and Mexico, is it really so good that less than half of all Germans are under 40 and so bad that more than half of all Mexicans are under 20? Which of those nations will benefit first from "being digital"?
http://www.media.mit.edu/~nicholas/Wired/WIRED3-02.html (2 of 3) [28-4-2001 14:10:25]
NEGROPONTE
When you delegate the tasks of mowing your lawn, washing your car, or cleaning your suit, very little privacy is at stake. By contrast, when you hand over the management of your medical, legal, or financial affairs to another human, the performance of those tasks depends on your willingness to reveal very private and personal information. While oaths and laws may protect some confidentialities, there is no real regulatory shield against the leaking of intimate knowledge by human assistants. That is achieved solely through trust and mutual respect. In the digital world, such high regard and real confidence will be more difficult to accomplish, given the absence of actual or inferred values in a nonhuman system. In addition, a society of electronic agents will be able to communicate far more efficiently than a collection of human cooks, maids, chauffeurs, and butlers. Rumors become facts and travel at the speed of light. Since I constantly argue in articles and lectures that intelligent agents are the unequivocal future of computing, I'm always asked about privacy. However, the question is usually posed without a thorough appreciation of how serious an issue privacy is. As many of my speeches are delivered to senior executives in fancy resorts, I sometimes announce that I have arranged with the hotel management to receive a list of the movies watched by members of the (usually) male-dominated audience in their rooms the night before. As half the faces in the audience turn red, I admit I am joking. But no one is laughing. It's quite telling, but not that funny. All of a sudden, our smallest actions leave digital trails. For the time being, these "bit-prints" are isolated instances of very small parts of our lives. But over time, they will expand, overlap, and intercommunicate. Blockbuster, American Express, and your local telephone company can suddenly pool their bits in a few keystrokes and learn a great deal about you. 
This is just the beginning: each credit-card charge, each supermarket checkout, and each postal delivery can be added to the equation. Extrapolate this trend and, sooner or later, you are but an approximation of your own computer model. Does this bother you?
Where I ate is not very interesting in comparison with either why I did so or any consequential information from my doing so (I liked the meal, my guest liked it, or neither of us liked it but didn't want to admit it). The fact that I ate someplace is almost meaningless if the intent and the result are unknown. Purpose, intent, and subsequent feelings are far more important than the action or choice itself. I leave only a few digital crumbs for the direct-marketing community by revealing, for example, that I dined somewhere. The interesting data is held by the agent who made the reservation and later asks me how the evening went. Today, marketers reverse-engineer a consumer's choice to infer why a decision was made. Advertisers cluster such demographics to further guess whether I might be inclined to purchase one soap flake versus another. Tomorrow, this will change. We can opt to tell a computer agent what we want, when we want it, and, therefore, how to build a model of us - the collective reasoning of the past, present, and future (as far as we know it). Such agents could screen and filter information and anonymously let the digital marketplace know that we are looking for something. Two kinds of agents will exist in that scenario: one will stay at home (on your wrist, in your pocket, in your radio) and one will live on the Net, surfing on your behalf, carrying messages back and forth. To some degree, the homebodies can be hermetically sealed. They will read bit streams about products and services broadcast in abundance through wired and wireless channels. They will scoop off subsets of information of personal interest - an act as simple as grabbing a stock quote for you, and as complicated as determining your interest in a segment of a talk show. These agents will be "all ears." Messenger agents will be more complicated. They will function as we do today when they cruise the Net looking for interesting things and people.
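The "all ears" homebody described above can be sketched in a few lines: a sealed agent that listens to everything broadcast and scoops off only what matches its owner's interests. Every name and the item format here are hypothetical, invented for illustration:

```python
# Minimal sketch of a "homebody" agent: hermetically sealed, it only
# listens to the broadcast stream and keeps items its owner cares about.
from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    payload: str

class HomebodyAgent:
    def __init__(self, interests):
        self.interests = set(interests)
        self.inbox = []

    def hear(self, item):
        # "All ears": examine every broadcast item, keep only relevant ones.
        if item.topic in self.interests:
            self.inbox.append(item)

broadcast = [
    Item("stock-quote", "AAPL 41 1/4"),
    Item("talk-show", "tonight's guest listing"),
    Item("weather", "rain in Boston"),
]
agent = HomebodyAgent(interests=["stock-quote", "weather"])
for item in broadcast:
    agent.hear(item)
print([i.topic for i in agent.inbox])
```

The messenger agent of the column would be the harder half: it carries the owner's model out onto the Net, which is exactly why the privacy and tamper-proofing questions below attach to it rather than to the homebody.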
We are at a time in history when the Net is sufficiently small for some to believe that Mosaic and other browsing tools are the only future. They are not. Even today, the people surfing the Net are distinguished by having the time to do so. In the future, there will be almost as few humans browsing the Net as there are people using libraries today. Agents will be doing that for most of us. These Net-dwelling agents are the ones we need to worry about when it comes to privacy. They need to be tamper-proof, and we must find ways to preclude new forms of kidnapping (agent-napping). Sounds silly? Just wait until the courts begin to agonize over whether intelligent agents can testify against us.
Clipper ships
Security and privacy are deeply interwoven. The government is asking us to sail in an ocean of data, but it wants the ability to board our (Clippered) ships at any time. This has outraged the digerati and has become the object of enormous debate in WIRED and other places. I yawn. This is why.
Encryption is not limited to a single layer. If I want to send you a secret message, I promise you that I can, without any risk of anyone else being able to decode it. I simply place an additional layer of encryption on top of the data, using an unbreakable code. Such codes need not be the wizardry of mathematicians or the result of massive electronics; they can be simple yet secure. To prove this, I have put 105 rows of 12 bits on the spine of my book, Being Digital. These bits contain a message. I bet that you will never be able to decode it. If classrooms of hotshot math students want to try, be my guest. WIRED magazine will honor you at great length. But don't spend too much time. It is not nearly as easy as the title of this story: James Bond. Next Issue: The Balance of Trade of Ideas
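The column does not say which unbreakable code it means, but the textbook example of a simple-yet-secure extra layer is the one-time pad: XOR the message with a truly random key of the same length, used once. A minimal sketch:

```python
# One-time pad sketch: provably unbreakable if the key is truly random,
# as long as the message, and never reused. Illustrative, not the
# column's actual scheme.
import secrets

def otp_apply(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "pad must be exactly as long as the data"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"being digital"
key = secrets.token_bytes(len(message))   # the shared one-time pad
ciphertext = otp_apply(message, key)
recovered = otp_apply(ciphertext, key)    # XOR is its own inverse
```

Without the pad, every plaintext of the same length is equally consistent with the ciphertext, which is why no amount of mathematics or computing recovers the message.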
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.03 March 1995.]
NEGROPONTE
A December 19, 1990, front-page story in The New York Times, "MIT Deal with Japan Stirs Fear on Competition," accused the Media Lab of selling out to the Japanese. This news flash concerned the 1986 endowment from a Japanese industrialist who wished to provide his alma mater, through a five-year affiliation, with the seeds of basic research in new media. Believe me, you never want to be on the front page of The New York Times. I did not realize the degree to which such an appearance becomes news unto itself, as well as fodder for derivative stories. Newsday wrote an editorial based on that story less than a week later, called "Bye Bye High Tech," without checking any of the details. The year 1990 marked a peak in US scientific nationalism. American competitiveness was crumbling, the deficit was rising, and we were no longer Number One at everything. So for goodness sake, Nicholas, the editorials implored, don't tell the world how to do software, especially multimedia, something the United States pioneered and dominated. Well, it doesn't work that way, especially in an era when computing is no longer limited to the large institutions and nations that can afford it. What particularly irked me was the notion that ideas should be treated like automobile parts, without any understanding of where they come from or how they evolve. Ironically, this particular case of seemingly unpatriotic behavior related to the field of consumer electronics, where hardware had long been abandoned by American industry. Zenith, one of the most vocal critics at the time, doesn't even build TV sets in the United States, while Sony manufactures products in San Diego and Pittsburgh that are sold domestically as well as exported throughout the world. Odd, isn't it?
Princeton, New Jersey, where 100 people (95 percent of them US citizens) are engaged in fundamental science - "good" jobs. But now, that was bad too, maybe worse, because Japan would run away with our creative skills, getting the goose and the golden eggs. This is silly! New ideas come from differences. They come from having different perspectives and juxtaposing different theories. Incrementalism is innovation's worst enemy. New concepts and big steps forward, in a very real sense, come from left field, from a mixture of people, ideas, backgrounds, and cultures that normally are not mixed. For this reason, the global landscape is the most fertile ground for new ideas.
often published papers over a year after they were submitted. Now that ideas are shared almost instantly on the Net, it is even more important that Third World nations not be idea debtors - they should contribute to the scientific pool of human knowledge. It is too simple to excuse yourself from being an idea creditor because you lack industrial development. I have heard many people outside the United States tell me that they are too small, too young, or too poor to do "real" and long-term research. Instead, I am told, a developing nation can only draw from the inventory of ideas that comes from wealthy countries. Rubbish. In the digital world, there should not be debtor nations. To think you have nothing to offer is to reject the coming idea economy. In the new balance of trade of ideas, very small players can contribute very big ideas. Next Issue: Bill of Writes
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.04 April 1995.]
NEGROPONTE
A Bill of Writes
Your support of the digital age is deeply appreciated. As we move from a world of atoms to one of bits, we need leaders like you explaining that this revolution is a big one, maybe a 10.5 on the Richter scale of social change. Alvin and Heidi Toffler are dandy advisors; good for you for listening to them! The global information infrastructure needs a great deal of bipartisan cooperation, if only to help (read: force) other nations to deregulate and privatize their telecommunications. As you reach out across the world to evangelize the information age, people will listen. However, there is something specific you could do for the digital revolution in your own congressional backyard, a few hundred feet from the Capitol building - perhaps something that has never been considered. Congress controls the world's largest library - it receives more than 30,000 items per day. Of these, perhaps 8,000 are saved. The Library of Congress is, quite frankly, out of shelf space - even if one includes the overflow cave it shares with Harvard University. The library, your library, is a giant dumpster full of atoms. Books and other materials check in but almost never check out. But a wonderful largesse inspires this library: to read a book, one need not possess a special Library of Congress card, nor be a citizen of the United States. A person needs only to possess the desire to read. Well, actually, the individual has to be in Washington, DC, must be over 18 years old, and the librarians need to be able to find the thing requested. If mis-shelved, it might as well be lost forever. Few people ever use the library because, in reality, almost no one can. The library is almost everything but usable, everything but digital. There are more than 100 million items, and virtually none are available in digital form. Recently, the library stuck its toe out onto the Internet, touching millions through exhibits on the World Wide Web. 
Indeed, last summer it received its first-ever digital books (never mind that no procedures exist for receiving those bits, and essentially no apparatus exists to deal with them).
As you know, almost every book published in the United States during the last 15 years has been produced digitally. Your next book will be, too, but I bet the atoms will still pile up in the depository - not the bits. This problem has not gone unnoticed. The National Science Foundation, the Advanced Research Projects Agency, and the Library of Congress are fully aware of the challenge of changing those atoms into bits. The government has committed more than US$30 million over four years to digital-library research, including new means to convert, index, and navigate the wealth of bits in the global public library of tomorrow. Jefferson would be proud.
Copyright unbound
But Jefferson did not understand bits. He could not imagine that 1s and 0s would represent information and one day be read (and eventually understood) by machines. All of copyright law is essentially a Gutenberg artifact, bound to paper and construed in ignorance of the digital age. It will take us years to build digital libraries and longer to retool copyright law. Intellectual property is an extraordinarily complex subject. We are almost clueless about how to handle digital derivative works and digital fair use. In a digital world, the bits are endlessly copyable, infinitely malleable, and they never go out of print. Millions of people can simultaneously read any digital document - and they can also steal it. So, how do we protect digital information? Our own export laws (a separate issue you may want to consider) stymie encryption shamelessly. The information age is in a bit of a mess when it comes to understanding who may access what, when, how, and under the control of whom. But don't wait. You control the library that manages United States copyrights. Establish a Bill of Writes immediately. Force us to find solutions, so our children and grandchildren can benefit sooner, rather than later, from being digital.
woven infrastructure, the Library of Congress could be transformed from a depository into a "retrievatory." It would be closer to your desk and closer to the living-room couch than any of the thousands of public library buildings. A Library of Progress could be in the pockets of tomorrow's kids. Having a Bill of Writes now means that we can spend the next 20 to 50 years hammering out new digital-property laws and international agreements without stunting our future. More importantly, it means that publishers and authors can elect to make their bits available after they decide they have earned enough, and the bits will be ready to go. Without a Bill of Writes, our grandchildren will spend a lot of time digitizing the 70 million items that will be saved by your library over the next 30 years. The British and the French are building gigantic new buildings to hold more shelves for future atoms. Let our country be the first to write being digital into law. Sincerely, Your friends at the MIT Media Lab This column was co-authored with Professor Michael Hawley (mike@media.mit.edu), who holds appointments at MIT in Electrical Engineering and Computer Science, and Media Arts and Sciences. Next Issue: Digital Videodiscs, Either Format, Are the Wrong Product
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.05 May 1995.]
NEGROPONTE
Here we go again. Big guns and big stakes have been pulled into the fracas over two competing digital videodisc formats. Both of them will store studio-quality, full-length movies on a compact disc, but each format will do it in a slightly different way. And people are taking sides. Even big players from Hollywood are jumping in, as if they knew the difference between an angstrom and a micron. These mature and savvy businessfolk, as well as the press, refuse to understand that the issue is not video, but bits. Bits are bits. The moving image is only one data type. Surely, we cannot expect consumers to buy one machine or technology for video, another for audio, another for data, and yet another for multimedia. The screwball idea of owning a digital videodisc, which is nothing more than a movie player, is tantamount to digital obscenity. Certainly, we must increase the number of bits per square millimeter on CDs, but we also need to treat those bits as nothing more or less than what they are. We do not need to agree in advance on precise digital standards and formats, and we cannot speculate in advance on all conceivable uses. Instead, we need to agree on "meta-standards," a way of talking about talking about those bits. Sound like double talk? It isn't. Listen. Today, you can store roughly 5 billion bits on one side of a CD. If someone provided the means of increasing that by a factor of 10, it would be absolutely terrific. But I hope that whoever comes up with that scheme makes those bits as flexible as possible. They may be used for video and they may not be. Don't use the typical "standards committee" mind-set to remove the potentially rich new forms of information and entertainment that haven't even been thought of yet.
Unstandard standards
The physical world is unforgiving, so standards are desperately needed. Nonetheless, we cannot agree which side of the road to drive on. Europe has 20 power plugs. And once standards are set in the world of atoms, they're nearly impossible to undo.
But the world of bits is different, more forgiving. Why can't the entertainment industry understand this? A string of bits can contain information about itself: what it is, how to decode it, where to get related data. Surely there are multiple applications and options for future digital formats. The world is not just about movies, movies, and more movies. We must not lock the format of the bits into a single standard and call it video. By contrast, what we must get right from the outset is the atoms: the diameter of the disc (the only variable that's not in dispute), the physical property of the small pits in the disc off which the laser bounces (as much as anything, an issue of choosing a wavelength of light that everyone can agree upon), and the thickness of the disc. If we don't agree on these, we are in very deep trouble. Although nobody is saying it, this is what the debate is really about, not video.
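The self-describing idea above can be made concrete with a toy chunk format (hypothetical, loosely in the spirit of tagged containers such as RIFF or PNG): each chunk announces its own type and length, so a player uses the chunks it understands and skips the rest, and no committee has to decide in advance that the disc holds only video.

```python
# Toy self-describing container: a 4-byte type tag and a 4-byte
# big-endian length precede each payload. Illustrative only.
import struct

def pack_chunk(kind: bytes, payload: bytes) -> bytes:
    assert len(kind) == 4
    return kind + struct.pack(">I", len(payload)) + payload

def unpack_chunks(blob: bytes):
    offset = 0
    while offset < len(blob):
        kind = blob[offset:offset + 4]
        (length,) = struct.unpack(">I", blob[offset + 4:offset + 8])
        yield kind, blob[offset + 8:offset + 8 + length]
        offset += 8 + length

blob = pack_chunk(b"MOVI", b"...video bits...") + pack_chunk(b"TEXT", b"hello")
for kind, payload in unpack_chunks(blob):
    print(kind, len(payload))
```

A "meta-standard" in the column's sense would standardize only this kind of framing, leaving the set of chunk types open to applications nobody has thought of yet.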
In fact, 100 years from now, people will find it odd that their ancestors used any moving parts to store bits! So please, Sony, Philips, Toshiba, Matsushita, and all your partners in Hollywood, don't give us a digital videodisc. Give us a new medium to store as many bits as possible. Learn from CD-ROM and let the market invent the new applications and new entertainment customers want. We'll be much better off. Next Issue: Affordable Computing
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.06 June 1995.]
NEGROPONTE
Affordable Computing
Andy makes my computer faster. Bill uses more of it. Andy makes my computer yet faster and Bill uses yet more of it. Andy makes more; Bill uses more. What do you and I get when Intel and Microsoft keep adding and taking? Almost nothing. My computer takes forever to start up. Loading my word processor is interminable. Each new release of an application is festooned with gratuitous options and an army of tiny icons whose meanings I no longer remember. We've all heard that an application program exposes only the tip of its iceberg. Well, I normally use only the tip! My old Mac 512K went "boing" and was on. I still run Microsoft Word 4.0 (and would run Word 2.0 if I could) because it's fast and simple. The last six years of advances in personal computing have resulted in diminishing returns when one considers the performance we see. According to Moore's law (Gordon Moore is Andy Grove's partner and mentor and the cofounder of Intel), since I installed Word 4.0 six years ago, my system should now be running 16 times faster - not slower. My lament is, Why can't my system run at the same apparent speed it did six years ago and cost 16 times less? What's going on? Simple. First, the cost of personal computers is held artificially at US$1,500 (give or take $500), because the current market can bear this amount and it provides suitable profit margins for manufacturers. Second, software is growing far too complex (featuritis), so clean, simple systems are almost extinct. Third, it has been historically difficult for American technology companies to be in the commodity business and sell 10 million computers at $150 instead of 1 million at $1,500. Software and hardware companies can get away with the $1,500 price tag when their primary customers are other businesses. But now that the home is the fastest-growing market, they've got to think about you as the customer - and this is an entirely different game.
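The 16-times figure follows from the popular reading of Moore's law as a doubling every 18 months: six years is four doubling periods. A one-line check:

```python
# Moore's law as popularly stated: performance doubles every 18 months.
years = 6
doubling_period_months = 18
doublings = years * 12 / doubling_period_months   # 4 doubling periods
speedup = 2 ** doublings                          # 2^4 = 16
print(f"{doublings:.0f} doublings -> {speedup:.0f}x")
```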
Don't let anyone tell you that a $1,500 price tag is endemic because computers are just plain expensive. Do you need proof that they can be cheap(er)?
Nintendo has released a 20-MHz, 32-bit RISC machine called the Virtual Boy (what an awful name) that includes extraordinary 3-D graphics and stereo sound; two built-in displays with four levels of gray; and a novel, two-hand game controller. Its retail price is $199, and it comes with one game cartridge. This product arrives at a time when the yen is below 85 to the dollar. Nintendo is not losing money on the razor to sell the blades. Why not take that kind of power and build it into a more general-purpose - but stripped-down machine, with Netscape or Mosaic built in, that everyone can afford? Congress worries about the information-rich versus the information-poor, but most of its members probably don't realize that computers can cost less than bicycles.
have to build a small RF receiver so it could load these machines with personalized commercials but without requiring the user to log in or pay for connection time. This could be done, for example, through terrestrial broadcasts using the likes of Mobile Telecommunications's SkyTel. There are several ways to implement this and more ways to make the business model attractive to vendors (and the computer more or less free). Advertisers would pay to gain access to what turns out to be about 2,000 acres of advertising space (changeable per square inch, per day, or per hour). That money could subsidize the cost of the computer and even pay for you to use it. Will this really work? Yes. Of course, many details need to be resolved. My point is not this specific example, but the need for some creative thinking about how to make and price PCs. I am no great fan of advertising, but it does represent a quarter-of-a-trillion-dollar industry, and there must be a way to use its size to make computing affordable to all Americans. So, step one is to get Andy and Bill to stop scratching each other's backs, and step two is to find new business models to make low-cost PCs available to consumers through the intelligent use of advertising. Come on, boys. Next Issue: PC Outboxes TV
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.07 July 1995.]
WIRED 3.08 - Bit by Bit, PCs Are Becoming TVs. Or Is It the Other Way Around?
NEGROPONTE
Message: 26
Date: 8.1.95
From: <nicholas@media.mit.edu>
To: <lr@wired.com>
Subject: Bit by Bit, PCs Are Becoming TVs. Or Is It the Other Way Around?
Only a year ago, people argued over which of the two - the PC or the TV - would serve as the port of entry for the I-way and become the information and entertainment appliance for the home. Well, the argument is over. The answer is the PC. George Gilder is right. There is life after television, and it's all about the PC. But don't confuse television with television sets. I'm not suggesting that relaxation is a thing of the past. To understand the rise of the PC and the demise of the TV set is to consider the role of the TV in the everyday life of Americans - as well as the degree to which that role can be played out more fully by other means in the digital age.

Is It Location, Location, Location?

Although Vincent Astor talked about real estate in a larger sense, consider the microlandscape of your own home. Whether in a living room, a library, or a bedroom, a television set normally has a large screen. The viewer sits beyond arm's reach, often with others, on a sofa. A PC, on the other hand, frequently has a smaller screen and is rarely located in the living room. The user sits upright in a chair, at a desk or table, with his or her nose roughly 18 inches from the screen. These particular customs stem not only from today's versions of these appliances but also from the fact that human interaction feels more meaningful when we are next to each other, not tethered by electrons. PCs, however, will inevitably become more bedworthy. And television sets will grow to resemble keyboardless computers, installed more like Sheetrock than furniture. The difference is not really social. Some still consider the experience of humans watching TV side by side to be more social than the interaction of the 10 million Americans online today. Yet we know that Americans engage more in "community" than "information retrieval" while online. (On America Online, according to its CEO Steve Case, the ratio is 60:40.)
catch-as-catch-can. By contrast, the PC receives its bits because it (or you) asks for them explicitly (or implicitly). That's the difference. In both cases, the TV and the PC are bit processors, accumulating bits as they come, or reaching for them from afar. Sometimes, you'll want to pull on bits; other times, you'll want them pushed at you - whether you're in the bedroom or the living room, sitting or lying, with someone or alone. For a while, computer designers were adding more and more video to their computers; meanwhile, TV manufacturers were adding more and more computing to their TVs. Modern TVs have chips running megaMIPs, and Intel processes VCR-quality TV (real-time, full-screen TV) on its current Pentium. Yet companies that made both TVs and PCs found that the respective divisions didn't even talk to each other: one group was addressing the "consumer" market, the other the "computer-user" market. Any knucklehead who believes such a distinction exists today doesn't deserve gainful employment. They are the same market.
than those currently in use. So far, this process of pushing bits at people has been in real time only. When people talk about 500-channel TV, they mean 500 parallel streams. They don't mean one program after another, broadcast in one five-hundredth of real time. You don't download TV, you join an ongoing program. That's why commercial TV stations and cable operators are delivering as many eyeballs as possible to advertisers - so they can afford to bring the programs to the people in the first place. When you buy a can of Coke, you are paying a few cents for the drink and the can, and nanodollars for television advertising. No doubt, the means of financing the bits will look strange to our great-great-grandchildren. But for today, it's what makes television work. Eventually, we'll find new economic models, probably based on advertising and transactions. Television will become more and more digital, no matter what. These are givens. So it makes no sense to think of the TV and the PC as anything but one and the same. It's time TV manufacturers invested in the future, not the past - by making PCs, not TVs. Next Issue: PC Outboxes TV
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.08 August 1995.]
NEGROPONTE
Get a Life?
Any significant social phenomenon creates a backlash. The Net is no exception. It is odd, however, that the loudest complaints are shouts of "Get a life!" - suggesting that online living will dehumanize us, insulate us, and create a world of people who won't smell flowers, watch sunsets, or engage in face-to-face experiences. Out of this backlash comes a warning to parents that their children will "cocoon" and metamorphose into social invalids. Experience tells us the opposite. So far, evidence gathered by those using the Net as a teaching tool indicates that kids who go online gain social skills rather than lose them. Since the distance between Athens, Georgia, and Athens, Greece, is just a mouse click away, children attain a new kind of worldliness. Young people on the Net today will inevitably experience some of the sophistication of Europe. In earlier days, only children from elite families could afford to interact with European culture during their summer vacations abroad. I know that visiting Web pages in Italy or interacting with Italians via e-mail isn't the same as ducking the pigeons or listening to music in Piazza San Marco - but it sure beats never going there at all. Take all the books in the world, and they won't offer the real-time global experience a kid can get on the Net: here a child becomes the driver of the intellectual vehicle, not the passenger. Mitch Resnick of the MIT Media Lab recently told me of an autistic boy who has great difficulty interacting with people, often giving inappropriate visual cues (like strange facial expressions) and so forth. But this child has thrived on the Net. When he types, he gains control and becomes articulate. He's an active participant in chat rooms and newsgroups. He has developed strong online friendships, which have given him greater confidence in face-to-face situations. 
It's an extreme case, but isn't it odd how parents grieve if their child spends six hours a day on the Net but delight if those same hours are spent reading books? With the exception of sleep, doing anything six hours a day, every day, is not good for a child.
Anyware
Adults on the Net enjoy even greater opportunity, as more people discover they can work from almost anywhere. Granted, if you make pizzas you need to be close to the dough; if you're a surgeon you must be close to your patients (at least for the next two decades). But if your trade involves bits (not atoms), you probably don't need to be anywhere specific - at least most of the time. In fact, it might be beneficial all around if you were in the Caribbean or the Mediterranean: your company wouldn't have to tie up capital in expensive downtown real estate. Certain early users of the Net (bless them!) are now whining about its vulgarization, warning people of its hazards as if it were a cigarette. If only these whiners were more honest, they'd admit that it was they who didn't have much of a life and found solace on the Net, they who woke up one day with midlife crises and discovered there was more to living than what was waiting in their e-mail boxes. So, what took you guys so long? Of course there's more to life than e-mail, but don't project your empty existence onto others and suggest that "being digital" is a form of virtual leprosy for which total abstinence is the only immunization. My own lifestyle is totally enhanced by being online. I've been a compulsive e-mail user for more than 25 years; more often than not, it's allowed me to spend more time in scenic places with interesting people. Which would you prefer: two weeks' vacation totally offline or four to six weeks online? This doesn't work for all professions, but it is a growing trend among so-called "knowledge workers." Once, only the likes of Rupert Murdoch or the Aga Khan could cut deals from their satellite-laden luxury yachts off the coast of Sardinia. Now all sorts of people from Tahoe to Telluride can work from the back seat of a Winnebago if they wish.
B-rated meetings
I don't know the statistics, but I'm willing to guess that the executives of corporate America spend 70 to 80 percent of their time in meetings. I do know that most of those meetings, often a canonical one hour long, are 70 to 80 percent posturing and leveling (bringing the others up to speed on a common subject). The posturing is gratuitous, and the leveling is better done elsewhere - online, for example. This alone would enhance US productivity far more than any trade agreement. I am constantly astonished by just how offline corporate America is. Wouldn't you expect executives at computer and communications companies to be active online? Even household names of the high-tech industry are offline human beings, sometimes more so than execs in extremely low-tech fields. I guess this is a corollary to the shoemaker's children having no shoes. Being online not only makes the inevitable face-to-face meetings so much easier - it allows you to look outward. Generally, large companies are so inwardly directed that staff memorandums
about growing bureaucracy get more attention than the dwindling competitive advantage of being big in the first place. David, who has a life, needn't use a slingshot. Goliath, who doesn't, is too busy reading office memos.
Luddites' paradise
In the mid-1700s, mechanical looms and other machines forced cottage industries out of business. Many people lost the opportunity to be their own bosses and to enjoy the profits of hard work. I'm sure I would have been a Luddite under those conditions. But the current sweep of digital living is doing exactly the opposite. Parents of young children find exciting self-employment from home. The "virtual corporation" is an opportunity for tiny companies (with employees spread across the world) to work together in a global market and set up base wherever they choose. If you don't like centralist thinking, big companies, or job automation, what better place to go than the Net? Work for yourself and get a life. Next Issue: Year 2020, the Fiber-Coax Legacy
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.09 September 1995.]
NEGROPONTE
In 2020, people will look back and be mighty annoyed by our profligate insistence on wiring a fiber-coax hybrid to the home rather than swallowing the cost of an all-fiber solution. They'll ask, "Why didn't our parents and grandparents plan more effectively for the future?" As far as the American home is concerned, the phone companies have the right architecture (switched services), and the cable companies have the right bandwidth (broadband services). We need the union of these: switched broadband services. But how do we get from here to there? No one will deny that the long-term solution is to install fiber all the way, but the benefits seem diffuse and the costs acute. In the eyes of the telcos and cable companies, the question is financial - and since the near-term balance sheets don't add up, fiber is not being laid all the way. One way around this problem is to circumvent the private market and let a telecommunications monopoly build the infrastructure, which is exactly what Telecom Italia is doing. It has declared fiber to the home as its goal and will swallow the initial cost of meeting this goal in the name of national interest. This is one of the few benefits of a government-owned monopoly: Italy will have a far better multimedia telecommunications system than the United States by 2000.
needed. Part of it is the fact that TVs will need an adapter. But none of that $400, mind you, is the cost of the fiber, which, these days, is more reliable and cheaper than copper, even including the connectors. So, the cost difference today between hybrid and pure fiber is $400 per household. That estimate was $1,000 two years ago and will probably be $200 in a year or two. If we base our decision not to run fiber on a number that is dropping so rapidly, have we really made the right choice? If what stands between me and fiber to my home is $400, I'll raise my hand and pay my share. I bet others would too. Maybe, in staring so hard at the bottom line, we are failing to remember what's really going on here.
Joe six-packets
The use of bandwidth is generational. As soon as kids find the Net alternative, they spend less time watching TV. The number of Web sites is doubling every 53 days. These will increase, not decrease, and provide the basis for a huge nano-economy when we crack the nut of e-cash. Andy Lippman, associate director of the Media Lab, has a nice way of putting it. When people take him to task about the real need for symmetry in future communications systems, he notes that it's already built into our current ones, reserved for the head ends, not the customers. But more and more people will want to be their own head ends. Our wiring and our consumption of new media are deeply interwoven. What we see in the current fiber-coax strategies is fiscal timidity, justified by the usage patterns of an old-line broadcast model, not the Net. There is a way to do it right, and that is to provide fiber all the way to the home. Instead of wasting time justifying half-baked ideas, let's find ways to finance the solution. The Italians already have. Next Issue: Being 10
[Copyright 1995, WIRED Ventures Ltd. All Rights Reserved. Issue 3.10 October 1995.]
NEGROPONTE
Do you realize that in France the first six letters of a keyboard don't spell QWERTY but AZERTY? In March of this year, when French Culture Minister Jacques Toubon announced the decision to rid the French language of foreign (read: English) words by making it illegal (a US$3,500 fine) to use such words in company names and slogans, I was sadly reminded of a 1972 job I conducted for the Shah of Iran. My task was to provide a color word processor - the Shah wished to see Farsi texts in which color depicted the age of a word. His desire was to understand his language rather than purge it. I suppose, by contrast, Minister "James Allgood" plans to change all stop signs to "Arret." Given this backdrop of nonsense at the highest level of government, is it much of a surprise that Europe is such a weak player in the computer and telecommunications industry? Of all fields, this industry is truly global and borderless. And as with air-traffic control, English is the lingua franca. Bits don't wait in customs; they flow freely across borders. Just try stopping them. WIRED's first World Wide Web page, for example, was developed in Singapore - a place whose support for freedom of the press is dubious, a place William Gibson referred to as "Disneyland with the Death Penalty" (WIRED 1.4, page 51). Many artistic, industrial, and intellectual movements are driven by distinctly national and ethnic forces. The digital revolution is not one of them. Its ethos is generational and young. The demographics of computing are much closer to rock music than theater. French rock star Johnny Halliday is allowed to sing in English, after all. If Europe wishes to remain at the vanguard of culture, it must step off its high horse and look more imaginatively at the future. Maybe it is time to discontinue ministries of culture.
move into the digital generation? Because like most places in Europe, France is a top-down society, where a job is a place one occupies and protects. It is not a process of building, creating, and dreaming. Incentives for young entrepreneurs are almost nonexistent. Compared to their US counterparts, French young people are just not taken seriously. Double-breasted wisdom reduces risk. A generally aging population enjoys stability and places confidence most easily in those who have had considerable and tested experience. Ballet dancers, downhill skiers, and mathematicians may peak at thirtysomething; CEOs and national leaders, by contrast, are groomed by the passage of time. The word "leader" presumes age, despite Alexander the Great, who at his death was six years younger than Bill Gates is today. I happened to be in Paris in May 1968, when students my age took to the streets. I asked myself, Why are we, in the United States, so complacent and docile? Fourteen years later, I found myself working directly for the Elysee Palace. And, guess what? Many of the people orbiting Mitterrand were the same people who had hurled paving stones through the tear gas in 1968.
Venture Void
When people ask me why so many new ideas in my field come from the United States, I talk about the respect we give to young people and to our heterogeneous culture. But the real difference is our venture capital system, which is almost totally absent in Japan and Europe, where accountants intermix venture money with large leveraged buyouts; the statistics therefore mask the real gap between those regions and the United States, where venture capital firms invested US$3.07 billion in 1993. The result is many fewer young European and Japanese companies that combine the genius of the hacker with the drive of the entrepreneur. This is particularly important when the entry cost is nontrivial and distribution determines the difference between success and failure. New ideas are not just about capital. They are also about risk and the willingness to take it. The flip side of venture capital is the risk young people are frequently willing to take with something even bigger. I have seen marriages fail, people work themselves to death (literally), and an obsession for success that overshadows every other human dimension. Good or bad, such obsessive commitment is a key part of many new ventures. The currency of achievement is often not money but personal fulfillment and passion, something too easily thwarted by the bureaucracies of a homogeneous, old society.
playfulness and an infrequent convergence of intellectual cultures, which is where computer ideas have traditionally come from. One of MIT's most significant computer forces during the early '60s came from its model railroad club. Another came from the Science Fiction Society. Multimedia has disparate roots in storytelling, drama, music, and cinematography. The point is that new ideas do not necessarily live within the borders of existing intellectual domains. In fact, they are most often at the edges and in curious intersections. This means that institutions like universities and PTTs have to embrace some very anti-establishment ideas. Europe's dominantly state-run universities and PTTs just don't do that very well. They run a close first and second for knocking down new ideas. The European Union is now faced with a global information infrastructure in which it just may not be a playeur. Next Issue: Human Interface: Sensor Deprived
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.09 September 1994.]
NEGROPONTE
Most equipment and network providers believe that entertainment will finance the superhighway and that video-on-demand, VOD, is the driving force or killer app of our wired future. I do not disagree with this view, but I marvel at the short-sighted, incomplete, and outright misleading conclusion drawn from it. The case for VOD goes as follows: Let's say a videocassette-rental store has a selection of 2,000 tapes. Suppose it finds that 5 percent of those tapes result in 90 percent of all rentals. Most likely, a good portion of that 5 percent would be new releases and would represent an even larger proportion of the store's rentals if the available number of copies were larger. Videocassette-rental stores will go out of business within a decade. (It makes no sense to ship atoms when you can ship bits.) The easy conclusion is that the way to build an electronic Blockbuster is to offer only those top 5 percent, those primarily new releases. Not only would this be convenient, it would provide tangible and convincing evidence for what some still consider an experiment. It would take too much time and money to digitize all 29,000 movies made in America by 1990. It would take even more time to digitize the 30,000 TV programs stored in the Museum of Television & Radio in New York, and I'm not even considering the movies made in Europe, the tens of thousands from India, or the 12,000 hours per year of soaps made in Mexico by Televisa. The question remains: Do most of us really want to see just that top 5 percent? Or, is this herd phenomenon driven by the old technologies of distribution?
AAATV
Some of the world's senior cellular telephone executives recite this jingle: "anything, anywhere, anytime." These three A's are a sign of being modern and being wired (and wireless, actually). When I hear this mantra I try not to choke, because my goal is to have "nothing, nowhere, never" unless it is timely, important, amusing, relevant, or capable of engaging my imagination. AAA stinks as a paradigm for human communication -- agents are much better. But AAA is a beautiful way to think about TV.
We hear a great deal of talk about 1,000 channels of TV. Allow me to point out that, even without satellite, more than 1,000 programs are delivered to your home each day! Admittedly, they are sent at all -- and odd -- hours. The 150-plus channels of TV listed in Satellite TV Week add another 2,700 or more programs available per day. If your TV could store every program transmitted, you would already have five times the selectivity offered in the superhighway's broad-brush style of thinking. But, instead of keeping them all, have your agent-TV grab the one or two in which you might have interest, for you to see anywhere and anytime. Let AAATV expand to a global infrastructure: the quantitative and qualitative changes become interesting. Some people might listen to French television to perfect their French, others might follow Swiss Cable's Channel 11 to see unedited German nudity (at 5 p.m. New York time), and the 2 million Greek Americans might catch any one of the three national or seven regional channels of Greece. The British devote 75 hours per year to the coverage of chess championships and the French commit 80 hours of broadcasting to the Tour de France. Surely American chess and bicycle enthusiasts would enjoy access to these events -- anytime, anywhere. My point is simple: the broadcast model is what is failing. "On-demand" is a much bigger concept than not-walking-out-in-the-rain or not-forgetting-a-rented-cassette-under-the-sofa-fora-month. It's consumer pull versus media push, my time -- the receiver's time -- versus the transmitter's time.
Rethreaded TV
Beyond recalling an existing movie or playing any of today's (or yesterday's) TV around the world (roughly 15,000 concurrent channels), VOD could provide a new life for documentary films, even the dreaded "infomercial." The hairs of documentary filmmakers will stand on end when they hear this. But it is possible to have TV agents edit movies on the fly, much like a professor assembling an anthology using chapters from different books. If I were contemplating a visit to the southern coast of Turkey, I might not find a documentary on Bodrum, but I could find sections from movies about wooden-ship building, nighttime fishing, underwater antiquities, and Oriental carpets. These all could be woven together to suit my purpose. The result would not be an "A+" in Introductory Filmmaking. But one doesn't expect an anthology to be Shakespeare. In fact, one judges production values through the eyes of the beholder. It would help to thread chunks made by great organizations such as National Geographic, PBS, or BBC, but the result would have meaning only to me.
Cottage Television
Finally, the 3.1 million camcorders sold in the US last year cannot be ignored. If the broadcast model is colliding with the Internet model, as I firmly believe it is, then each person can be an unlicensed TV station. Yes, Mr. Vice President, this is what you said in LA. Even before we understand how the Internet will function as a commercial enterprise, we must reckon with
uncountable hours of video. I am not suggesting we consider every home movie to be a prime-time experience. What I am saying is that we can now think of TV as a great deal more than high-production-value mass media when the content strikes home, so to speak. Most telecommunications executives understand the need for broadband into the home. (Recall, broadband, for me, is 1.5 to 6 Mbits per household member, not Gbits). What they cannot fathom is the need for a back channel of similar capacity. The video back channel is already accepted in teleconferencing and is a particularly fashionable medium in divorced families for the parent who does not have custody of the children. That's live video. Consider "dead" video. In the near future, individuals will be able to run video servers in the same way that 57,000 Americans run computer bulletin boards today. That's a television landscape of the future which looks like the Internet. Point to multipoint may swing dramatically toward multipoint to multipoint, on my time. Next Issue: Why Europe is Unwired (Part One)
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.08 August 1994.]
NEGROPONTE
I'm always amazed when I read about how badly young Americans are educated, not because such statements are necessarily untrue, but because most authors and critics go on to compare our children with those of France, Korea, or Japan, whose brains have been stuffed with thousands of facts. Most American children do not know the difference between the Baltics and the Balkans, or who the Visigoths were, or when Louis XIV lived. So what? I'll bet you don't know that Reno is west of Los Angeles. Let me point out the heavy price paid in those countries for requiring young minds to master this apparent font of knowledge. Children in Japan are more or less dead on arrival when they enter the university system. Over the next four years they'll feel like marathon runners asked to go rock climbing at the finish line. Worse, those young people didn't learn a thing about learning and, for the most part, have had the love of it whipped out of them. In the 1960s, most pioneers in computers and education advocated a crummy drill-and-practice approach, using computers on a one-on-one basis, in a self-paced fashion, to teach those same God-awful facts more effectively. Now, with multimedia, we are faced with a number of closet drill-and-practice believers, who think they can colonize the pizazz of a Sega game to squirt a bit more information into the thick heads of children.
the wheel and finding out for yourself. Until the computer, the tools and toys for these experiences were limited, special-purpose apparatuses, frequently administered with extreme control and regimentation (my excuse for not learning chemistry). The computer changed this radically. All of a sudden, learning by doing has become the standard rather than the exception. Since computer simulation of just about anything is now possible, one need not learn about a frog by dissecting it. Instead, children can be asked to design frogs, to build an animal with froglike behavior, to modify that behavior, to simulate the muscles, to play with the frog.
range of cognitive and learning styles. In fact, many children said to be learning disabled flourish here. Perhaps we have been more "teaching disabled" than "learning disabled." Even without a robust theory of why building things helps us learn, why designing frogs may be better than dissecting them, we can rest assured that the constructivist tools will grab an increasing piece of the market for learning technology. This is happening precisely at a time when more and more people are taking the publishing model seriously, perhaps too seriously, and expanding it to multimedia. There may be a surprising end run by more design-based software and networking technology. Current work with Lego at the Media Lab includes a computer-in-a-brick prototype, which demonstrates a further degree of flexibility and opportunity for constructivism. It includes interbrick communications and opportunities to explore parallel processing in ways that none of us could before. Kids using this today will learn physical and logical principles you and I learned in college. Imagine a Lego set in the year 2000, where each brick says: "Intel inside." Next Issue: Prime Time is My Time
[Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.07 July 1994.]
NEGROPONTE
Message: 12
Date: 6.1.94
From: <nicholas@media.mit.edu>
To: <lr@wired.com>
Subject: Less Is More: Interface Agents as Digital Butlers
Al Gore need not be right or wrong in his conception of details. It almost doesn't matter whether he calls it an information superhighway, an infobahn, or a National Information Infrastructure. What matters is his personal and sincere interest in computers and communications and the fact that his enthusiasm has raised our popular consciousness of telecommunications. The media cacophony over phenomena like the Internet fosters an open architecture and emphasizes access by all Americans. The clamor, however, has perpetuated a tacit assumption that more bandwidth is an innate, a priori, and (almost) constitutional good. The right to 1,000 channels of TV! Continental Cable, the local cable company in Cambridge, Massachusetts, now offers Internet access at 500,000 bits per second. With that service, The Wall Street Journal takes sixteen seconds to transmit in its entirety (as structured data mostly, not fax, please!). When fiber reaches the home, by some estimates, we will have access to as much as 100 billion bits per second. Hmmm. Most people generally make a false assumption that more bits are better. More is more. In truth, we want fewer bits, not more. Our needs fall along a spectrum. Consider a newspaper: Our requirements are very different on Monday morning from what they were on Sunday afternoon. At 7 a.m. on a workday, you are less likely to be interested in browsing stories. Serendipity just does not play a key role then. In fact, you would most likely be willing to pay The New York Times US$10 for ten pages vs. $1 for 100 pages. If you could, you would opt for a heavy dose of personalized news. It's simple: Just because bandwidth exists, don't squirt more bits at me. What I really need is intelligence in the network and in my receiver to filter and extract relevant information from a body of information that is orders of magnitude larger than anything I can digest. To achieve this we use a technique known as "interface agents." 
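The column's arithmetic checks out; here is a minimal sketch, assuming a structured-data edition of the paper is about 1 MB (8 million bits) - the size implied by the quoted rate and time, not a figure stated in the text:

```python
# Back-of-envelope check of the bandwidth figures quoted above.
# Assumption: ~1 MB (8,000,000 bits) for the Journal as structured data.

def transfer_seconds(size_bits: float, bits_per_second: float) -> float:
    """Time to move size_bits over a link of the given capacity."""
    return size_bits / bits_per_second

paper_bits = 8_000_000           # assumed ~1 megabyte edition
cable_bps = 500_000              # Continental Cable's quoted Internet rate
fiber_bps = 100_000_000_000      # the "100 billion bits per second" estimate

print(transfer_seconds(paper_bits, cable_bps))  # 16.0 seconds
print(transfer_seconds(paper_bits, fiber_bps))  # 8e-05 seconds
```

The point of the comparison is exactly Negroponte's: at fiber speeds the bottleneck is no longer delivery but selection.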
Imagine a future where your interface agent can read every newspaper and catch every broadcast on the planet, and then, from this, construct a personalized summary. Wouldn't that be more interesting than pumping more and more bits into your home?
Guides
Why do people pay 85 cents to find out whether their one daily lottery ticket won? TV Guide has been known to make larger profits than all four networks combined. What do these things tell you? They tell you that the value of information about information can be greater than the value of the information itself. From that and other similar observations (American Airlines makes more from its reservation system than from carrying passengers), I am willing to project an enormous new industry based on a service that helps navigate through massive amounts of data. When we think of new information delivery, we tend to cramp our thoughts with concepts like "info grazing" and "channel surfing." These concepts just do not scale. With 1,000 channels, if you surf from station to station, dwelling only three seconds per channel, it will take almost an hour to scan them all. A program would be over long before you could decide whether it is the most interesting. I am fond of asking people how they select a theatrical, box-office movie. Some pretend they read reviews. I hasten to interject my own solution - which is to ask my sister-in-law - and people quickly admit that they have an equivalent. What we want to build into these systems is a sister-in-law, an interface agent which is both an expert on movies and an expert on you.
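The surfing arithmetic above can be verified in a line or two (channel count and three-second dwell time are the column's numbers; the 150-channel case is an assumption drawn from the satellite lineup mentioned in an earlier column):

```python
# Why channel surfing does not scale: sequential scan time at a fixed dwell.

def scan_minutes(channels: int, dwell_seconds: float) -> float:
    """Minutes needed to glance at every channel once."""
    return channels * dwell_seconds / 60

print(scan_minutes(1000, 3))  # 50.0 minutes -- "almost an hour"
print(scan_minutes(150, 3))   # 7.5 minutes, already impractical mid-program
```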
credible third party, perhaps a local telephone company, perhaps a long distance company like AT&T, perhaps a new venture altogether. What we should be looking for is an entity which is able and willing to keep our identities confidential while at the same time passing along newsworthy advertising and information. Such services will only work with a high degree of machine learning. While it is important to postulate such learning, how does this relate to human learning? Next Issue: Learning vs. Teaching
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.06 June 1994.]
NEGROPONTE
There is no speed limit on the electronic highway. Change, whether technological, regulatory, or in the area of new services, is happening faster than I can believe - and I think of myself as an extremist when it comes to predicting and initiating change. To me, the current state of affairs is like driving on the autobahn at 160 kph. Invariably, just as I realize the speed I'm going, zzzwoom, a Mercedes passes, then another, and another. Yikes, they must be driving at 200 kph or 220 kph. Such is life in the fast lane of the infobahn, but nowhere more so than on Wall Street. Bob Lucky, Bellcore's vice president for applied research and a highly acclaimed author and engineer, noted recently (in "Looking Ahead at Telecommunications," Bellcore Exchange, November 1993) that he no longer keeps up to date technically by reading scholarly publications; instead he reads The Wall Street Journal. As usual, Bob is right. The reason for this phenomenon is simple: The future of the computer and communications industries will be driven by applications, not by scientific breakthroughs like the transistor, microprocessor, or optical fiber. The problems now stem not from basic material sciences but from basic human needs. To focus on the future of the "bit" industry, there is no better place to set one's tripod than on the entrepreneurial, business, and regulatory landscape of the United States, with one leg each in the New York, American, and NASDAQ exchanges.
movie bits, even hockey-playing bits). The reason is simple: As everything becomes digital, the bits commingle (that's called multimedia), and they leak into the interstices of humanity, previously unreachable by the delivery of physical matter (that's called new markets). If your company makes only one kind of bit, you are not in very good shape for the future; both Sumner Redstone and Barry Diller know that. The Paramount story is about bits, not egos. All of a sudden, companies see the opportunity not only to resell their archived bits but to mix and match, to augment, and to personalize information and entertainment. The more a bit can be put to use or recycled, the more it is worth. In this regard, a Mickey Mouse bit is probably worth a lot more than a Star Trek bit. My goodness, Mike Eisner's bits even come in lollipop form. More interestingly, his guaranteed audience is refueled at a rate that exceeds 100 million births each year. I am certainly betting on Disney's bits.
Bit Transportation
I cannot think of a worse business to be in than the transport of bits - worse than the airline business with its fare wars. Consider: the business is regulated to such a degree that NYNEX must put telephone booths (which last all of 48 hours) in the darkest corners of Brooklyn, while its unregulated competition will put its booths on Fifth and Park avenues. That's only the beginning: Now the digital era emerges, and bits need to be priced differently. Surely none of us is going to pay the same for a movie bit (there are about 10 billion of them in a very highly compressed digital movie) as we will for a conversation bit (there are only 100 million of them in a highly data-compressed, two-hour conversation). Consider your mother-in-law's return home from the hospital and her need for an open line, 24 hours per day, just to monitor a half-dozen bits per hour. Try figuring out that business model! Or what about the 12-year-old kid doing his homework, who should have access to WIRED's content for nothing while Wall Street analysts should pay a fair price. It is not difficult to speculate. If management limits a telecommunications company's long-term strategy to carrying bits, it will not be acting in the shareholders' interest. Owning the bits or rights to the bits, or adding significant value to the bits, must be a part of the equation for telecommunications success. Otherwise, there will be no place to add value, and telco operators will be stuck with a service fast becoming a commodity, the price of which will go down further and further.
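The movie-versus-conversation figures above make the pricing absurdity concrete. A sketch, using the column's own bit counts:

```python
# Flat per-bit pricing, using the column's estimates.
movie_bits = 10_000_000_000  # a very highly compressed digital movie
call_bits = 100_000_000      # a compressed two-hour conversation

ratio = movie_bits / call_bits
print(ratio)  # 100.0 -- a flat tariff charges 100x more for the movie
```

At a uniform price per bit, the two-hour movie costs one hundred times the two-hour conversation, even though both occupy the line for the same time. That is the business model the column says cannot survive.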
Computer companies have been positioning themselves as software companies for years. By software they usually meant tools, sometimes end-user systems. A change is afoot. And, no, I'm not going to tell you about the multimedia industry, again. What I am talking about is information about information, and the processes by which we filter the onslaught of bits. The computer industry's blades may not only be modeled after Bambi or Tetris. Instead, I see a huge market in the agent business, modeled more after the added value of an English butler or the Librarian of Congress. Yes, making and owning the bits is certainly better than simply carrying, storing, or churning them. But there may be another bit business: understanding the bits. So far, in the theater of Wall Street, the personal information filter business has only played a bit part. I assure you that it will be tomorrow's lead role on the stage of success. Next Issue: Digital Butlers: Interface Agents [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.05 May 1994.]
NEGROPONTE
People are startled when I criticize the fax machine and accuse it of retarding the ascension of computer-readable information. I truly believe that the fax machine has been a serious blemish on the computer landscape, the ramifications of which we will feel all too soon. But the typical response to such a statement is: "What do you mean? The advent of the fax has been extremely positive." The fax is a step backward because it does not contain "structured data," but rather an image of text that is no more computer-readable than this page of Wired (unless you are reading it on America Online). Even though the fax is delivered as bits before it is rendered as an image on paper, those bits have no symbolic value. If, 25 years ago, we (that is, some of us in the scientific community) could have been overheard predicting the percentage of text that would be computer-readable by the turn of the millennium, the percentages would have been as high as 90 or 95 percent. But then, boom, around 1980 the previous steady growth in computer-readability took a nose-dive because of the fax. This magazine page, without my picture, takes about 20 seconds to send by fax. At 9,600 bps, this represents approximately 200,000 bits of information. On the other hand, using electronic mail, only a quarter of those bits are necessary: the ASCII and some control characters. In other words, if you charge me per bit to transmit this page, not only is e-mail better, because it is computer-readable, but it will cost less than a quarter of the fax price. Who's fooling whom and why did this happen?
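The fax-versus-e-mail numbers above can be checked directly. The character count for a dense magazine page is my assumption; the transmission figures are the column's:

```python
# Bits to send one magazine page two ways, per the column's figures.
fax_bits = 20 * 9600      # 20 seconds at 9,600 bps -> 192,000 bits (an image)
chars_on_page = 6000      # rough count for a dense text page (assumption)
email_bits = chars_on_page * 8  # ASCII at 8 bits per character -> 48,000 bits

print(email_bits / fax_bits)  # 0.25 -- "a quarter of those bits"
```

And the e-mail bits, unlike the fax bits, still carry symbolic meaning a computer can search, edit, and reuse.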
A Japanese Legacy
To understand the fax, one must understand Japan, Kanji, and iconic "alphabets" (full Kanji, for example, has over 60,000 symbols). As recently as ten years ago, Japanese business was not conducted via letter but by voice, usually face to face. Few businessmen had secretaries, and documents were written, often painstakingly, by hand. The equivalent of a typewriter looked more like a typesetting machine,
with an electromechanical arm positioned over a dense template of choices to produce a single Kanji symbol. It goes without saying that a string of 8 bits, like ASCII, was insufficient to represent the full set of choices. The pictographic nature of Kanji made the fax a natural. Since little Japanese was then (and is now) in computer-readable form, there was (and is) no comparable loss. In a very real sense, fax standardization, led by Japanese companies, gave great short-term favor to their written language but resulted in great long-term harm to ours. I have heard estimates that as much as 70 percent of telephone traffic across the Pacific today is fax, not voice. Like the answering machine, the fax is a blessing to the phone companies.
Football as a Model
Television is "moving" fax. The Economist estimates that less than 1 percent of the world's information is in digital form. This estimate certainly appears accurate when considering photographs, film, and video, all of which require so many bits. However, the statistic obscures the fact that even when those media are made digital, they are no more computer-understandable than they are today.
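The gap between digital and structured can be made concrete. Below, a sketch contrasts the bits in one raw video frame with a structured record of the same moment; the field names and player records are purely illustrative, not from any real broadcast format:

```python
# One uncompressed color video frame: digital, but no symbolic structure.
frame_bits = 640 * 480 * 24
print(frame_bits)  # 7372800 bits, and the computer "understands" none of them

# The same instant captured as a model (hypothetical schema):
play = {
    "clock": 754.2,  # seconds into the game
    "players": [
        {"id": 12, "x": 31.5, "y": 12.0, "heading": 87.0},
        {"id": 80, "x": 45.0, "y": 3.5, "heading": 90.0},
    ],
}
# A few hundred bits per player, yet this record can be replayed from any
# camera angle, diagrammed, or compared with earlier plays. The frame can
# only be shown as-is.
```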
Consider your audio CD (audio fax, if you will), which is indeed digital, but not structured, data. So far, the closest example to audio ASCII is musical notation as we know it in scores. A football game, recorded and transmitted via digital or analog video, has no structure. Each frame functions like a fax. The alternative is to capture the game as a model, with each player represented as a complex mathematical marionette, whose kinematics can be derived by a sensor and transmitted to your receiver (4-Dimensional ASCII). At the receiver, not in the camera, the representation is "flattened" onto the screen or displayed holographically. Not only can the game be seen from any perspective, but the computer can reconstruct plays as diagrams, compare the tactics of one play with a previous one, show it from the perspective of the quarterback, or make some canny predictions. My point, therefore, is more general than a flame at fax machines. It is a call for greater attention to the structure and content of bit streams, versus the wholesale digitizing of data. Being digital is not enough. When American Express began storing my credit card slips as images, my heart sank. They seemingly threw out the content of the transaction and saved only a picture of my payment. Similarly, I just don't believe that insurance adjustment forms need to be stored as pictures. We need the computer vendors to stop selling imaging systems to information providers. These are no more inspired or helpful than microfilm. It is time to buckle down and attack the hard problem of page, document, picture, and video description languages that allow for all our data streams to be in symbolic, not facsimile form. Otherwise, we are all being sold a "bit" of goods. Next Issue: Bit By Bit on Wall Street [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.04 April 1994.]
NEGROPONTE
The scene comes from an MIT proposal on human-computer interaction submitted to ARPA twenty years ago by Chris Herot (now at Lotus), Joe Markowitz (now at the CIA), and me. It made two important points: Speech is interactive, and meaning - between people who know each other well - can be expressed in shorthand language that probably would be meaningless to others. It may be difficult for the reader to believe the degree to which speech I/O has been studied separately in the past. Like Benedictine monks, each research team developed and guarded a special voice input or output technique, rarely fussing over the conversational brew. Understanding speech as a component of a conversation is very different from understanding it as a monologue.
Ah Ha!
I have told the following story a million times (admittedly, a figure of speech!). In 1978, our lab at MIT was building a management information system for generals, CEOs, and 6-year-old children, namely an MIS system which could be learned in less than ten seconds. As part of this project we received NEC's top-of-the-line, speaker-dependent, connected speech-recognition system. Like all such systems, then and now, it was subject to error when the user showed even the lowest level of stress in his or her voice. Mind you, this would not necessarily
be audible to you or me. ARPA, the sponsors of that research, made periodic "site visits" to review our progress. On these occasions, the graduate students prepared what we thought were bug-free demonstrations. We all wanted the system to work absolutely perfectly during these reviews. The very nature of our earnestness produced enough stress to cause the system to crash and burn in front of the ARPA brass. Like a self-fulfilling prophecy, the system almost never worked for important demos; our graduates were just too nervous and their voices reflected their condition. A few years later, one student had an idea: Find the pauses in the user's speech and program the machine to generate the utterance, "ah ha," at judicious times. Thus, as one spoke to the machine, it would periodically say: ah hha, ahhh ha, or ah ha. This had such a comforting effect (it seemed that the machine was encouraging the user to converse), that the user relaxed a bit more and the performance of the system skyrocketed. Our idea was criticized as sophisticated charlatanry. Rubbish. It was not a gimmick at all, but an enlightened fix. It revealed two important points: For one, not all utterances need have lexical meaning to be valuable in communications; for another, some utterances are purely protocols, like network handshaking. Think of yourself on the telephone. If you do not say "ah ha" to the caller at appropriate intervals, the person will become nervous and, ultimately, inquire: "Are you there?" You see, the "ah ha" is not saying "yes," "no," or "maybe," but is basically transmitting one bit of information to say, "I'm still here and listening." The reason for revisiting this long story is that some of the most sophisticated people within the speech recognition community failed to understand what I have just illustrated. In fact, in many labs today, speech recognition and production are still studied in different departments or labs! I frequently ask, "why?" 
One conclusion is that these people are not interested in communication, but transcription. That is to say, people in speech recognition wish to make something like a "listening" typewriter which can take dictation and produce a document. Good luck! People are not good at that. Have you ever read a transcription of your own speech? Instead of transcription, let's look at speech as an interactive medium, as part of a conversation. This perspective is well presented in the forthcoming book by Chris Schmandt entitled Voice Communication with Computers: Conversational Systems, (Van Nostrand Reinhold, 1994).
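The student's "ah ha" fix, described in the story above, amounts to a tiny protocol: watch for pauses in the incoming speech and emit a backchannel token. A toy sketch, with illustrative timings of my own:

```python
# A toy version of the "ah ha" fix: for every gap between a speaker's
# phrases long enough to register as a pause, emit a backchannel token.
# The one-second threshold is an illustrative assumption.
def backchannel(utterance_gaps, pause_threshold=1.0):
    """Return one 'ah ha' per gap (in seconds) that reads as a pause."""
    return ["ah ha" for gap in utterance_gaps if gap >= pause_threshold]

print(backchannel([0.3, 1.4, 0.2, 2.1]))  # ['ah ha', 'ah ha']
```

Note what the token carries: not "yes," "no," or "maybe," but a single bit of "I'm still here and listening," exactly like a network handshake.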
Table Talk
Talking with computers goes beyond speech alone. Imagine the following situation. You are sitting around a table where everyone but you is speaking French, but you do not speak French. One person turns to you and says: "Voulez-vous encore du vin?" You understand
perfectly. Subsequently, that same person changes the conversation to, say, politics in France. You will understand nothing unless you are fluent in French (and even then it is not certain). You may think that "Would you like some more wine?" is baby-talk, whereas politics requires sophisticated language skills. So, obviously the first case is simple. Yes, that is right, but that is not the important difference between the two conversations. When the person asked you if you wanted more wine, he or she probably had an arm stretched toward the wine bottle and eyes pointed at your empty wine glass. Namely, the signals you were decoding were parallel and redundant, not just acoustic. Furthermore, all the subjects and objects were in the same space and time. This is what made it possible for you to understand. The point is that redundancy is good. The use of parallel channels (gesture, gaze, and speech) should be the essence of human-computer communications. In a foreign land, one uses every means possible to transmit intentions and read all the signals to determine even minimal levels of understanding. Think of a computer as being in such a foreign land, ours, and being expected to do everything through the single channel of hearing. Humans naturally gravitate to concurrent means of expression. Those of you who know a second language, but do not know it very well, will avoid, if at all possible, using the telephone. If you arrive at an Italian hotel and find no soap in the room, you will go down to the concierge and use your best Berlitz to ask for soap. You may even make a few bathing gestures. That says a lot. When I talk with my computers in the future, I will expect the same plural interface. If I do too much talking at one of my computers, I will not be surprised if it asks me one day, "Can we have a conversation about this?" Next Issue: The Fax of Life [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.03 March 1994.]
NEGROPONTE
In contrast to the gain in graphical richness of computers, speech recognition has progressed very little over the past fifteen years. And yet, fifteen years from now, the bulk of our interaction with computers will be through the spoken word. It is time to move on this interface backwater and correct the fact that computers are hearing impaired. In my opinion, the primary reason for so few advances is perspective, not technology. People have been working on the wrong problems and hold misguided views about the voice channel. When I see speech recognition demonstrations or advertisements with people holding microphones to their mouths, I wonder: Have they really overlooked the fact that one of the major values of speech is that it leaves your hands free? When I see people with their faces poked into the screen - talking - I wonder: Have they forgotten that the ability to function from a distance is a reason to use voice? In short, most people developing speech systems need a lesson in communications interfaces.
Aural Text
Oversight number two: Speech is more than words. Anyone who has a child or a pet knows that what is said can be as important as how it is said. In fact, dogs respond to tone of voice more than any innate ability to do complex lexical analysis. I frequently ask people how many words they think their dogs know and I have received answers as high as 500 to 1,000. I suspect the number is closer to 20 or 30. Spoken words carry a vast amount of information beyond the words themselves, which is something that my friends in speech recognition seem to ignore. While talking, one can convey passion, sarcasm, exasperation, equivocation, subservience, exhaustion (and so on) with the exact same words. In speech recognition, these subcarriers of information are ignored or, worse, treated as bugs rather than features. They are, however, the very features that make speaking a richer medium than typing.
and maybe only mine. The presumed need for speaker independence is derived in large part from earlier days, when the phone company wanted anybody to be able to talk to a remote database. The central computer needed to be able to understand anybody, a kind of "universal service." Today, we can do the recognition in the handset, so to speak. What if I want to talk with an airline's computer from a telephone booth? I call my computer or take it out of my pocket and let it do the translation from voice to ASCII. Once again, we can do a great deal at the "easier" end of this axis. Finally, connectedness. Surely we do not want to talk to a computer like a tourist addressing a foreign child, mouthing each word as if in a locution class. Agreed. And this axis is the most challenging in my mind. But even here, there is a way out in the short term: Look at vocabulary as multiword utterances, not as just single words. These utterances can be short, slurred phrases of all kinds, which endow the machine with sufficient connected speech recognition to be very useful. In fact, handling runtogetherspeech in this fashion may well be part of the personalization and training of my computer. My purpose is not to argue any one of these three points to death, but to show more generally that one can work much closer to the easiest corner of speech space than has been assumed and that the hard and important problems are elsewhere. Said in another way: It is time to look at talking from a different perspective. Next Issue: Talking WITH Computers [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
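The multiword-utterance idea above - treating a slurred phrase as a single vocabulary entry rather than a string of words - can be sketched as a lookup table. The phrase list is hypothetical, and an exact-match lookup stands in for the acoustic template matching a real recognizer would perform:

```python
# Whole phrases as single vocabulary entries (hypothetical examples),
# per the column's suggestion for handling "runtogetherspeech."
vocabulary = {
    "whatsthetime": "what is the time",
    "openthemail": "open the mail",
    "readitback": "read it back",
}

def recognize(utterance):
    # Exact-match lookup as a stand-in for matching acoustic templates
    # of whole phrases; returns None for anything out of vocabulary.
    return vocabulary.get(utterance)

print(recognize("whatsthetime"))  # what is the time
```

Training such a table phrase by phrase is exactly the kind of personalization the column imagines: my computer learns my slurs, and maybe only mine.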
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.02 February 1994.]
NEGROPONTE
Have you ever wondered why your computer screen has jagged lines? Why do pyramids look like ziggurats? Why do uppercase E, L, and T look so good, yet S, W, and O look like badly made Christmas ornaments? Why do curved lines look like they've been drawn by someone with palsy? I've met people who think these staircase artifacts are intrinsic to computer displays - more or less a given with which they must live. After all, we've watched enough Westerns and seen stagecoach wheels go backwards, and we don't flame the movie studios. Well, this month's column is my flame to almost every computer manufacturer and software developer on the planet. People are tired of your jaggies. It's time to correct your offensive fonts and graphics. And, as you know, it is not hard to do. Here's an irony. Remember those funny fonts taken from magnetically sensitive characters on checks? One font was even given a name: MICR (my guess is that this is an acronym for Magnetic Ink Character Recognition). During the 1960s and 1970s, graphic designers frequently used MICR to cast a look and feel to the electronic age. We are doing this all over again in the 1980s and 1990s with aliased fonts (so far nameless), frequently used in graphic design to signal "computer." Before this mascot does get a name, let's correct it, because today there is no need for lines and characters to be anything less than print quality and perfectly smooth. I won't go into the added irritation we encounter in animation. As an image moves, the jagged little steps come and go, increase and decrease in number, and move in all sorts of counterintuitive directions. The passenger beside me on the plane, as I wrote this, was playing a golf game on his laptop and did not seem to be fazed by the fact that the golf club went from being perfectly straight to being a staircase with moving steps. When I pointed this out, he suddenly found the game too annoying to play (sorry about that).
He reacted with disbelief when he learned how unnecessary this condition is.
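How unnecessary, exactly? The standard remedy is to shade each edge pixel by how much of it the ideal line actually covers, rather than snapping it fully on or off. A minimal sketch, estimating coverage by subsampling (the line, pixel, and sample counts are illustrative):

```python
# Anti-aliasing in miniature: shade an edge pixel by the fraction of it
# lying under the ideal line y = slope * x, estimated by subsampling.
def coverage(px, py, slope=0.5, samples=4):
    """Fraction of the pixel at (px, py) falling below the line."""
    hits = 0
    for i in range(samples):
        for j in range(samples):
            x = px + (i + 0.5) / samples  # subsample center
            y = py + (j + 0.5) / samples
            hits += y < slope * x
    return hits / samples**2

aliased = round(coverage(4, 2))  # hard on/off: 0 here -- a staircase step
antialiased = coverage(4, 2)     # 0.25 -- a gray quarter-covered edge pixel
```

The aliased version snaps every partly covered pixel to black or white, producing the staircase; the gray values let the eye integrate a smooth edge at the very same resolution.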
I would have expected Japan to be a greater force in this area, because Kanji benefits even more than the Latin alphabet from the resolution added by graytone. I would have expected Europe to be more active, since there is much EC legislation concerning computer screen characteristics. I would have expected the United States to implement anti-aliasing, if only because its theoretical and practical roots are in America. But, alas, the ambivalence is worldwide. As we rush into a world of sophisticated games, electronic books, and multimedia everything, we will invariably see more and more jaggies and more and more people will assume they are intrinsic. They are not. If you don't believe me, ask a computer science friend. There really is no excuse any more. So wake up: Apple, IBM, DEC, HP, Microsoft, and all you other companies. We're tired of the jaggies. Next Issue: Talking to Computers [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1994, WIRED Ventures Ltd. All Rights Reserved. Issue 2.01 January 1994.]
NEGROPONTE
I never knew the meaning of "pleonasm" until I recently listened to a lecture by Mike Hammer (not the detective, but the world's leading "re-engineer"). In his typically animated fashion, Hammer presented "corporate change" as an oxymoron on its way to becoming a pleonasm. Basically, a pleonasm is a redundant expression like "in one's own mind." It is the opposite of an oxymoron, which is an apparent contradiction like "artificial intelligence" or "airplane food." If prizes were awarded for the best oxymorons, "virtual reality" would certainly be a winner. Freshman physics teaches us about real versus virtual images. Classicists get a more complex dose of the same in their reading of Plato. But virtual reality - or VR - is becoming a pleonasm. If the words "virtual reality" are seen not as noun and adjective but as "equal halves," the logic of calling VR a pleonasm is more palatable. Basically, VR makes the artificial as realistic as the real. In flight simulation, its most sophisticated and longest-standing application, VR is more realistic than the real. Pilots are able to take the controls of fully loaded passenger planes for their first flight because they have learned more in the simulator than they could in a real plane. In the simulator, a pilot can be subjected to rare situations that, in the real world, would require more than a near miss. I have often thought that one of the most socially responsible applications of VR would be its required use in driving schools. Virtual reality can place drivers in perilous predicaments - on a slippery road, a child darts out from between two cars - that they may encounter in their cars. All of us hope we are never faced with such situations, and none of us knows how we might react. VR allows one to experience a situation "with one's own eyes" (another pleonasm). As the French journalist Rene Doutard wrote, "Courage is having done it before."
system with a parts cost of less than US$25. The dress code for VR is a head-tracking helmet with goggle displays. The principle is simple: Put data where the person is looking and nowhere else. In donning such a display, the general locale of your gaze is a given and elementary optics can move an image from the tip of your nose to infinity. For a computer-graphics jock, the measures of reality are the numbers of polygons and/or edges a given image has, and the ability to apply textures to those images (considered cheating by some). Should you ask yourself, "What is the optimum number of edges and display resolution needed for photo-realistic imaging?" the answer is probably near you as you read this. Look out a window and imagine that window is a display. The argument will be made that head-mounted displays are not acceptable because people feel silly wearing them. The same was once said about stereo headphones. If Sony's Akio Morita had not insisted on marketing the damn things, we might not have the Walkman today. I expect that within the next five years more than one in ten people will wear head-mounted computer displays while traveling in buses, trains, and planes. That number could include pilots - who could be landing planes in low visibility wearing goggles that subtract the real fog. By the way, don't believe for a moment that all of our perceptions are derived from what we see. One of the most frequently cited studies conducted at the Media Lab was authored by Professor Russ Neuman, who proved that people saw a better picture when sound quality was improved. This observation extends to all of our senses as they work cooperatively. Some Department of Defense prototypes have shown that minor and random vibrations of a tank simulator platform induce an uncanny sense of extra visual realism.
This is commonplace today. But what I remember so vividly is that everyone - not most people, but literally everyone would, after putting these glasses on for the first time, immediately move their heads from side to side, looking for the images before them to reflect their expectations of realistic motion parallax. Usually the system did not perform. That human response, the "neck jerk" reaction, says it all. In VR, the frequency response of the system will be almost all that counts. While I am not aware of any such studies that would support the claim, I suspect that rapid response can be traded for resolution. If you look to the right or the left, you will be very dissatisfied if the landscape moves along jerkily, with spatial and temporal aliasing, because aliased VR is the oxymoron while VR itself will be the pleonasm, whether we like the ring of the words or not. Next Issue: Aliasing: The Technical Blindspot of the Computer Industry [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.06 November 1993.]
NEGROPONTE
The fact that, in one year, a 34-year-old former Michigan cheerleader generated sales in excess of $1.2 billion did not go unnoticed by Time Warner, which signed Madonna to a $60 million "multimedia" contract last year. At the time, I was startled to see "multimedia" used to describe a collection of unrelated traditional print, record, and film productions. Since then, I see the word almost every day in the Wall Street Journal, often used as an adjective to mean anything from interactive to digital to broadband. It would seem that if you are an information and entertainment provider who does not plan to be in the multimedia business, you will soon be out of business. What is this all about? It is about both new content and looking at old content in different ways. It's about one intrinsically interactive medium, made possible by a digital lingua franca: bits. And it's about the decreasing costs, increasing power, and exploding presence of computing in our daily lives: 47 percent of all PCs sold in 1992 went to the home market. This technological push is augmented by an aggressive pull from media companies, which are selling and reselling as many bits as possible, including Madonna's (which sell so well). This not only means reuse of data, music, and film libraries but also the expanded use of text, audio, and video for as many purposes as possible, in multiple packages and through diverse channels.
Anyhow, it is interesting to observe that during the 1970s, "multimedia" meant "nightclubs." It carried the connotation of rock music plus light show. In 1978, when we showed a full-color, illustrated page of text on a computer screen, people gasped in astonishment when an illustration turned into a sound-synch movie at the touch of a finger. Some of today's best multimedia titles, like Robert Winter's Mozart, are high production value renditions of sloppy but seminal experiments from the 1970s. What today's titles share with the past is the simple idea that three discrete streams of data - audio, video, and text - explicitly meet on the screen with an order imposed by astute synchronization. The current challenge in designing multimedia product is very much the organization of time, or what might be called "page layout" in the space of X, Y, and T. But multimedia can mean more.
moving elements, like a person walking across a stage, drop out in favor of the temporarily stable ones. What occurs in this example of "multimedia" is important: Movement from one medium to the next requires transcoding one dimension (time) into another dimension (space). We have simple examples in our daily lives, where, for instance, a speech (the acoustic domain) is transcribed with punctuation (the text domain) to render a small semblance of intonation. In the script for a play, much more is added in parentheses to characterize action. True multimedia, not all of which has to be explicit sound and light on the screen (some of it can be in your head), will include the automation of transcoding from one medium to the next because people will not be satisfied with the assumption that they only can be seated in front of an array of playback machines lashed together by a gaggle of wires. We are just as likely to want teleconferencing output, for example, on a Personal Digital Assistant as we are on a full-blown "virtual reality" system worn over our heads. In short, ubiquity is more important to multimedia than is explicit immersion. Next Issue: Virtual Reality - Oxymoron or Pleonasm? [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.05 October 1993.]
WIRED 1.04 - Set-Top Box As Electronic Toll Booth: Why We Need Open-Architecture TV
NEGROPONTE
Is Bill Gates using John Sculley's speeches to guide his alliances? The makers of computer hardware and software evince uncanny synchronism in their lusting toward the cable industry. This is not surprising when we consider that ESPN has more than 60 million subscribers. Microsoft, Silicon Graphics, Intel, IBM, Apple, and HP have all entered major agreements with the cable industry. The object of this ferment is the set-top box, currently little more than a plug adapter but destined to be much more. At the rate things are going, we may soon have as many types of set-top boxes as we now have infrared remotes. Such a smorgasbord of incompatible systems is a horrible thought. The passion for this box stems from its potential function as, among other things, a gateway through which the "provider" of that box and its interface can become a gatekeeper of sorts, charging onerous fees for information as it passes through the gate and into your home. While this sounds like a dandy business, it is unclear if it's in the public's best interest. Worse, a set-top box itself is short-sighted and the wrong cynosure. We should broaden our vision and set our sights instead on open-architecture television (OATV).
acceptance of this box, the idea is to aggrandize it with additional functions - give it long pants, so to speak. But this cannot be the right approach.
Why not learn from Wang, Data General, and Prime? What those once high-flying companies had in common was a total disregard for open systems. Open systems exercise the entrepreneurial part of our economy and call into question proprietary systems and broadly mandated monopolies. In an open system we compete with our imagination, not with a lock and key. The result is not only a large number of successful companies, but a wide variety of choice for the consumer and an ever more nimble commercial sector, one that can change and grow. This may not work for automobile manufacturers but it does for the computer industry and it can work for television. The reason is simple: None of us gives a damn about the box; we care about programming. Just as software and system services drive the computer industry, programming and intelligent browsing aids will drive the television industry. Ask yourself: Under which scenario will we see new media and the most innovative content - one featuring an enlarged set-top box, or one featuring open-architecture television? Next Issue: Modern Multimedia
[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.04 August 1993.]
NEGROPONTE
When I was an Assistant Professor of Computer Graphics at MIT in the late '60s, my career had little meaning at a dinner party. Computers were totally outside everyday life. I recall one Boston Brahmin who thought that a joy stick was a sex object. Today, I hear 60-year-old tycoons boasting about how many bytes of memory they have in their Wizards, and the capacity of their hard disks. Others talk half-knowingly about the speed of their processors (thanks to "Intel Inside") and affectionately (or not) about the flavor of their operating systems. I recently met one socialite who provides consulting services; her business card reads "I do Windows." Bandwidth is different; it remains a mystery to most. This is true because we often have too much when we don't need it or too little when we do. In addition, we scarcely understand the trade-off between bandwidth and intelligence. If computer companies were the only players in our wired lives, we would experience a greater tendency to compute (apply intelligence) at the periphery of the network rather than shipping bits back and forth in wholesale fashion. The computer culture has learned from human interface research that the supreme form of interaction is the lack of it. Less is more.
fiber will come into being automatically through the forces of common sense and Mother Nature.
WIRED 1.02 - The Bit Police: Will the FCC Regulate Licenses to Radiate Bits?
NEGROPONTE
The Bit Police: Will the FCC Regulate Licenses to Radiate Bits?
The FCC has decided to give television broadcasters 6 MHz of additional spectrum for HDTV on the condition that currently used spectrum is returned within fifteen years. It is a foregone conclusion, thank goodness, that HDTV will be digital, and will probably operate at 20 million bits per second. Now, imagine that you own a TV station and the FCC just gave you a 20 million bits-per-second license. You have just been given permission to become a local epicenter in the bit radiation business. What would you do with your license? Face it, the very last thing you would do is broadcast HDTV - if only because the programs would be scarce and the receivers few. Anyway, as I hope I made clear in the last issue, television's DNA is not connected to picture resolution. So this is what you might do: First, with a little cunning, you'd probably realize that you could broadcast four channels of digital, broadcast-quality, standard NTSC television, thereby increasing your audience share and advertising revenue. Upon further reflection, you might decide to transmit three TV channels, two digital radio signals, a news data channel, and a paging service. It continues. At night, when few people are watching TV, you might use most of your license to spew bits into the ether for delivery of personalized newspapers to be printed in people's homes. Or, on Saturday, you might decide that resolution counts (say, for a football game) and devote 15 million of your 20 million bits to high-definition transmission. Literally, you will be your own FCC for those 20 million bits, allocating them as and when you see fit. That is, if the Bit Police don't stop you. To be perfectly clear, this is not what the FCC originally had in mind when it allocated HDTV spectrum among existing broadcasters. The body politic, particularly groups hankering for spectrum, will scream bloody murder when it realizes that TV stations just had their current
broadcast capacity increased by 400 percent, at no cost, for the next fifteen years! Does that mean we should send in the Bit Police to make sure that this new spectrum is used only for HDTV?
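The arithmetic of being "your own FCC" for those 20 million bits is easy to sketch. The service names and rates below are illustrative assumptions drawn from the column's examples (four broadcast-quality digital NTSC channels fitting in 20 Mbit/s implies roughly 5 Mbit/s each), not real broadcast figures:

```python
# Sketch of a broadcaster allocating its own 20 Mbit/s "bit radiation"
# license among services, hour by hour. All names and rates are
# illustrative assumptions, not actual broadcast parameters.

LICENSE_BPS = 20_000_000  # the 20 million bits per second granted for HDTV


def allocate(services):
    """Return the allocation as a dict if it fits the license, else raise."""
    total = sum(rate for _, rate in services)
    if total > LICENSE_BPS:
        raise ValueError(f"over budget by {total - LICENSE_BPS} bit/s")
    return dict(services)


# Daytime mix: three TV channels, two radio signals, news data, paging.
daytime = allocate([
    ("ntsc-1", 5_000_000),
    ("ntsc-2", 5_000_000),
    ("ntsc-3", 5_000_000),
    ("radio-1", 200_000),
    ("radio-2", 200_000),
    ("news-data", 4_000_000),
    ("paging", 100_000),
])

# Saturday football: resolution counts, so devote 15 of the 20 million
# bits to a high-definition transmission.
saturday = allocate([
    ("hdtv-football", 15_000_000),
    ("ntsc-1", 5_000_000),
])
```

The point of the sketch is that the constraint is a single bit budget, not a fixed service: any mix that sums to 20 Mbit/s or less is fair game, and the mix can change by the hour.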
during its creation. The text is in computer-readable form. The images are scanned and the layout produced on a desktop publishing system. The style of WIRED's creation is the epitome of both a digital process and a digital lifestyle (my contributions, for example, are destined to be written from the seat of an airplane and sent to WIRED via e-mail). Only when the final pages are output to film for printing does the digital representation vanish. Let's pretend that instead of providing WIRED in hard copy, we could transmit it in bits. The subscriber could transcode them into print form or more interactive soft copy. We would create a very different magazine - among other things, we would provide varying levels of detail, our cutting room floor would be empty, and the magazine (if we still use the word) would be conversational. The message is that all information providers will be in a common business - the bit radiation business - not radio, TV, magazines, or newspapers. I do not believe there will be a Bit Police. The FCC is too smart. Its mandate is to see advanced information and entertainment services proliferate in the public interest. There is simply no way to limit the freedom of bit radiation any more than the Romans could stop Christianity, even though a few brave and early data broadcasters will be eaten by the Washington lions in the process. In the last issue, my e-mail address was listed as Negroponte@Internet. That bogus address was a misjudgment, meant to leave the impression that most of my communications with WIRED are by e-mail, which is true. The above address is real. That does not mean I will answer all fan or hate mail, but at least I will see it. Next Issue: Debunking Bandwidth
[Copyright 1993, WIRED Ventures Ltd. All Rights Reserved. Issue 1.02 April 1993.]
NEGROPONTE
Showgun
During the late sixties, a few visionary Japanese asked themselves what the next evolutionary step in television would be. They reached a very logical conclusion: higher resolution. They postulated that the move from black-and-white to color would be followed by filmic-quality TV, which in turn would be followed by 3-D TV. They proceeded, in their inimitable style, to develop something called Hi-Vision by scaling up TV as we know it in the analog domain. Around 1986, Europe awoke to the prospect of Japanese dominance of a new generation of television. For totally protectionist reasons, Europe developed its own analog HDTV system, HD-MAC, making it impossible for Hi-Vision, which the United States officially backed at the time, to become a world standard. More recently, the US, like a sleeping giant, awoke from its cryogenic state of mind and attacked the HDTV problem with the same analog abandon as the rest of the world. However, this awakening occurred at a time when it was possible to think about television in the digital domain. The perseverance of a few has resulted in our nation being the sole official proponent of a purely digital process. That's the good news. The bad news is we blew it. We made the same mistake as Japan and Europe when we decided to root our thinking in high definition. Despite a great deal of hand waving, the truth is that all these systems (currently under consideration for a national standard by the Federal Communications Commission - which President Clinton could then change) were constructed on the premise that achieving increased image quality is the relevant course to be pursuing.
This is not the case, and there is no proof to support the premise.
Reckless Nationalism
TV is so bound in culture that even some very democratic countries legislate the number of hours that foreign programming is allowed on their domestic channels. Less democratic nations use TV for propaganda and control. This blending of the cultural with the potentially political has crept into the technical arena and, for a variety of gratuitous economic reasons, we are presented with the likely nightmare that Japan, Europe, and the United States will go in totally different directions vis-a-vis TV. However, my bet is that 1993 will be the year these diverging courses correct themselves and converge. Europe, Japan, and the US will collaborate, and being digital will be recognized, finally, as a truly evolutionary step. Why am I optimistic after outlining such gloomy polemics? For several reasons, all relating to one question: Where is the action? Nintendo, Sega, Apple, and IBM - not your run-of-the-mill TV makers - will present us with a burst of multimedia products in the home very soon. At least 200,000 direct broadcast satellite receivers, fully digital, will hit the stores in time for Christmas. And cable operators are trying to get digital TV even sooner than that. Namely, there will be an outpouring of digital video services that have absolutely nothing to do with HDTV, and they will be in place long before action can be taken on any FCC decision if, in fact, one is made. Finally, a small band of multinational people is making great progress in the standards arena. The roots of digital/video harmony reside in the Moving Picture Experts Group, MPEG, which is a bona fide part of ISO, the International Organization for Standardization.
NEGROPONTE
Browsing is an obvious idea, but it is not necessarily the right one. Too much of the Net's future is staked on this unchallenged notion. And the sooner we stop relying on this concept, the better. Just think: How much browsing do you do in real life, or, as John Perry Barlow would say, in "meatspace"? Most working adults don't have time to spare. Browsing is better suited to the confines of a doctor's waiting room, an airplane seat, or a rainy Sunday afternoon. Rarely does browsing suggest the serious, productive use of one's time. Rather, it suggests another era, when work, home life, and vacations were less entwined than they are today. So what happened? Why did we suddenly elevate this faulty, serendipitous, and almost haphazard process to its current prominence - even predominance - on the Internet? The verb browse is derived from the behavior of hungry animals who, in winter when pasture is barren, forage for tender shoots and the buds of trees and bushes. This implies that there isn't a lot to choose from and that what is good needs to be actively sought out. But browsing takes time - the one thing most of us don't have. For example, I do far less window-shopping than I did when I was young (and yes, I miss it). Undeniably, browsing can be fun and useful but, as with tourism, only so much and so often. Funny how we use the words cruising and surfing to describe our behavior on the Web. How often do we invoke the words learning or engaging when we browse? The Web is a digital landmark, as important as the Net itself. Its inventors, Tim Berners-Lee and his colleagues, will probably never fully realize how important their contributions were, and will continue to be, because the Web can be viewed in so many different ways. For me, it's less about multimedia or hyperlinks and more about turning the Net inside out. Instead of sending email to an individual - or to a list of individuals - I can now post a message and invite people to look it over. 
Sure, we've always had bulletin boards, telnet, and ftp on the Net, but the Web created a new and more accessible subworld, one more like the window-shopping experience than the original message-passing rubric. And in a way, that's a shame. Think of the change this way: the Internet is now like a city - people go places, visit
communities. In fact, we even call our own pages "home." But when we arrive at a place and try to make things happen, we often end up frustrated.
people. When people do use the Net, it will be for more suitable purposes: communicating, learning, experiencing. The idea that machines, not people, will dominate Net usage turns the model upside down, not just inside out. Suddenly "pages," if that's even an appropriate term, will need more and more computer-readable hooks so that programs can see what you or I view from the corner of our eye. When we browse, our eyes gravitate toward images - in the future, these images will need simple digital captions. This will certainly take steam out of the Net-based advertising we know today. Simply put, our eyeballs may not be there to see it. Next Issue: Who Will the Next Billion Users Be?
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.05 May 1996.]
NEGROPONTE
The question I'm asked most often is "Will the information rich get richer while the information poor get poorer?" My answer is "No." But that reply may be too quick and simple. If you agree that the Net will have a billion users by the turn of the century, you probably have also assumed that the majority of these users will be in developed nations. After all, of the roughly 10 million host machines that exist today, more than half are in the United States. Many of the rest are in G7 nations. In fact, the 50 least-developed countries of the world - those with less than US$500 per capita GDP - currently sport 23 host machines. (Curiously, 19 are in Nepal.) My point is that the information rich today are indeed rich and the information poor are indeed poor. But this will change. Consider a country like Malaysia, where the people value education, and the government, albeit slightly despotic, promotes development in a grand way. At the moment, there are 20,000 Internet users in Kuala Lumpur, a number that is growing by 20 percent each month. At this rate, all of Malaysia (some 19.1 million people) will be online by 2000. So far, we haven't been counting these people in our billion users calculation. And Malaysia is not the only country growing at this rate.
Gung-hoism
Consider the most gung-ho person in your neighborhood, the one who enthusiastically embraces trash collection, babysitting, and a host of other local civic projects. The neighbor who comes to mind is probably the newest arrival. Said another way, the most devout among us are frequently those who have most recently converted. We're all familiar with new email users who go berserk and swamp us with interminably long and chatty messages. This can happen on a global scale and is something to ponder when you realize that India and China represent more than 2 billion people. But the difference between using computers for email and, for example, primary education is that the former may be an infatuation while the latter can provide an everlasting square meal of digital nutrition. In general I'm very optimistic, especially about the developing world rapidly "becoming digital."
Almost half of the populations of developing nations are under 20, in contrast to less than a third in developed countries. Typically, this youth corps is considered a liability. But given the existing base of people, a large youth population is an asset as nations move forward, particularly in countries where older members of society are less literate. We all know that kids take to computers as they do to language, and that given the chance, they will jump into the digital world with passion, delight, and abandon. When PCs were only "personal computers," educational opportunities - especially in the developing world - were limited by the amount of software "second guessed" to be appropriate. With the Internet, this changes dramatically. It's no longer necessary to plot every step in advance. Kids can teach other kids around the world. Reasons for being able to read and write will become obvious.
Running such an effort would cost about as much as a few F-15s. The problem is not money, but how to do it. Under whose aegis? Unesco is too politicized, and the World Bank would want its money back. It may be time to create a new United Nations for cyberspace, an organization with a five-year half-life to make the digital world immediately available to everyone. It cannot be done country by country - governments move so slowly, and most are run by the digitally homeless, anyway. Something very new is needed. If you have a good idea, speak up. Use the email address above. Seriously. Next Issue: Object-Oriented Television
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.06 June 1996.]
NEGROPONTE
Object-Oriented Television
The Media Lab's Michael Bove believes that a television set should be more like a movie set. But movies require locations, actors, budgets, scripts, producers, and directors. What would it mean, Bove wonders, if your TV worked with sets instead of scan lines?
Storytelling
Having actors and sets hang around in our TVs isn't going to do us a lot of good unless we can tell them to do something interesting. So, in addition to objects, we need a script that tells the receiver what to do with the objects in order to tell a story. TV conceived as objects and scripts can be very responsive. Consider hyperlinked TV, in which touching an athlete produces relevant statistics, or touching an actor reveals that his necktie is on sale this week. Bits that contain more information about pixels than their color - that tell them how to behave and where to look for further instruction - can be embedded.
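The idea of objects carrying "bits about the bits" can be sketched in a few lines. This is a hypothetical toy model, not Bove's actual design; the class and field names are invented for illustration:

```python
# Toy sketch of television as objects plus metadata: each on-screen object
# carries "bits about the bits" that tell the receiver how to behave when
# the viewer touches it. All names here are illustrative assumptions.

class ScreenObject:
    def __init__(self, name, metadata):
        self.name = name
        self.metadata = metadata  # the bits about the bits

    def on_touch(self):
        """What the receiver shows when the viewer touches this object."""
        return self.metadata.get("on_touch")

# A scene from the column's examples: touch the athlete for statistics,
# touch the actor to learn his necktie is on sale.
scene = [
    ScreenObject("athlete", {"on_touch": "Season statistics for this athlete"}),
    ScreenObject("actor", {"on_touch": "His necktie is on sale this week."}),
]

def touch(scene, name):
    """Receiver-side dispatch: find the touched object and run its hook."""
    for obj in scene:
        if obj.name == name:
            return obj.on_touch()
    return None
```

The contrast with scan lines is the point: a raster knows only pixel colors, while an object-based transmission lets the receiver resolve a touch to an object and follow that object's instructions.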
These bits-about-the-bits will resolve a problem that has beleaguered Hollywood directors faced with one-version-fits-all screens and made them envious of graphic designers, who can design postage stamps, magazine ads, and highway billboards using different rules of visual organization. Television programs could react according to the originator's intention when viewed under different circumstances (for instance, more close-ups and cuts on a small screen). You think Java is important - wait until we have a similar language for storytelling. TV is, after all, an entertainment medium. Its technology will be judged by the richness of the connection between creator and viewer. As Bran Ferren of Disney has said, "We need dialog lines, not scan lines." This article was co-authored by V. Michael Bove (vmb@media.mit.edu), Alexander Dreyfoos Career Development professor at MIT's Media Lab. Next Issue: Building Better Backchannels
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.07 July 1996.]
NEGROPONTE
Try the following: Close your eyes and plug your ears. Imagine you are your own personal computer. Try it. You can't see you, you can't hear you - you just get poked at now and again. Not great, is it? No matter how helpful you want to be to you, it's tough going. When deprived of knowing what's happening around you, all the intelligence in the world won't make you into a faithful servant. It would be frustrating for you to be your own personal computer. The backchannels are far too limiting. Two decades ago, computers were totally sensory deprived, even in the direction of computer to user. Today, the flow of information from computers to people offers a richer experience, with color, motion, and sound. However, the opposite path - from people to computers - enjoys no such amelioration. Computer inputting today is almost as tedious as it was 20 years ago. In this sense, the interface viewed as a whole has become wildly asymmetrical - lots out, little in. Your personal computer doesn't have the recognition capability of a parrot.
computer at home? That was in 1977. Given that attitude, many computer corporations sat on their digital butts, enjoying a marketplace of corporate purchasing agents - people who bought computers for others to accomplish the tasks outlined in their job descriptions. Under those conditions, users were expected to suffer the indignity of using a computer and to dutifully work around, among other things, its hearing impediment. Now, suddenly, consumers like you and I are buying more than 50 percent of all PCs to use in our homes, to assist our children, and to entertain. Under these new conditions, a deaf (and dumb) computer is not acceptable. Change will occur only when manufacturers start taking the word personal in personal computers seriously. By this I mean building speaker-dependent voice recognition (which is so much easier than speaker-independent recognition). Also, manufacturers must focus on highly interactive speech, not transcription, which even humans cannot do properly. For those readers who think life will become terribly cacophonous in the presence of machines that talk and listen, let me say that we seem to work just fine with telephone handsets in our homes and offices. And for those of you who feel it is plumb silly to talk to an appliance, recall for a moment how you felt about answering machines not too long ago. No, speech really is the right channel, and it is time, once and for all, to move with resolve.
NEGROPONTE
Shipping bits will be a crummy business. Transporting voice will be even worse. By 2020, there will be so many broadband paths into and out of your home that competition will render bandwidth a commodity of the worst kind, with no margins and no real basis for charging anything. Fiber, satellites (both stationary and orbiting), and all sorts of terrestrial wireless systems will pour bits galore into your home. Each channel will have so much spare capacity that measuring available bandwidth will make as much sense as counting photons passing through a window. Scarcity creates value. Since fiber (including transducers) now costs less than copper (except for the shortest lengths), we will be installing fiber even if we do not need the bandwidth it provides. POTS, plain old telephone service, is better served and more inexpensively installed and maintained using fiber. Japan will have it in every home by 2015. There will be such a glut of bit-transportation capacity that vendors will be giving it away to get you to buy something or just to look at advertising. And we will soon be exchanging bits among ourselves that represent almost anything but real-time voice traffic.
Voiceless telephony
Today, the telephone companies take the phone in their name far too seriously. For example, they worry about Internet-based telephony without realizing that their real problem will be the reduction of real-time voice traffic in the digital age. Our great-grandchildren will be astonished and amused when they recall the waste and financial loss incurred at the end of the 20th century playing telephone tag. Their telecommunications world will be far more asynchronous than ours and will be based mostly in ASCII, not in audio or graphic renditions of it. "Hello?" The word is with us thanks to the telephone. Early telephone operators were called hello girls. While we have no hello girls today asking, "Are you finished?" we still use hello far too often. In fact, you never really want to say hello all by itself on the telephone. It is fine for face-to-face greetings, but said on the phone, it means you don't know who is calling, or why they are calling in the first place. That makes no sense. Your digital butler should say hello, not you.
Furthermore, why call at all? Sure, it may be important for many purposes, often for emotive reasons. Yet consider the alternatives now available. Federal Express's Web site is a nice example. Until recently, I would call an 800 number to ask a human if the 10-digit domestic or 12-digit foreign waybill number could be traced, then I would hear typing in the background. Now, I click a few times on the company's Web site and am much more satisfied with the quick, direct reply. The Relais & Chateaux hotels have been on the Web for more than a year and a half, so I have stopped calling them. Just think: all of these transactions and many more once required phone calls. In fact, this extends to people. If your circle of acquaintances is online, you call them much less. In my own case, I place fewer than five calls a day and receive as few. With my mother online, we call each other less but communicate almost daily.
Mouse potatoes
I truly believe that during prime time in 2005, more Americans will be on the Net than will watch network television. NBC, CBS, ABC, Fox, and CNN could by then be doing more business on the Web than via broadcast. Under these conditions, a telephone company stands to profit handsomely. And it does not have to own content - a common belief just five years ago. CNN does not want to personalize the news. It has enough trouble gathering it from around the world - and you don't necessarily want to limit your input solely to theirs. One hundred million
news-reading and news-watching Americans will soon realize the possibilities that can be derived from looking at 100 million different editions of the news - something the phone company could make possible. In fact, content providers are not well suited to deliver tailored news, as they are perforce focused on their own. I bet you would pay your phone company a few dollars a day for a news service, perhaps print in the morning and video in the evening, whose stories combined headline news and items of personal interest. In fact, this could be an ironic example of added value: I would pay my telephone company more to give me fewer bits, but the right bits. Wouldn't you? Next Issue: Electronic Word of Mouth
[Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.09 September 1996.]
NEGROPONTE
One fine day a young woman went out to buy a car. The dealer convinced her to purchase a Ford Taurus for US$19,500. She said she needed to sleep on it and would come back the next day. But instead of just sleeping on it, she used the Net to inquire whether there were others, near her, who were also considering buying a Taurus. By next morning, she had found 15 people who were. Some email discussion ensued, and she returned to the dealer to say she would take the car, but for $16,500. This was so far below his price that he assumed she had made a mistake. "No sir. I have not made a mistake," she replied. "I simply failed to mention that I am buying 16 cars, not one." Delighted with the idea of selling in such volume, the dealer promptly sold the cars at her price. A buyers' cartel - as opposed to a sellers' - is almost impossible to create: too many people need to be involved. Meeting with, speaking to, calling, or finding those who may be interested is too difficult, and you probably wouldn't know who to contact anyway. Consumers of products find themselves in a poor position compared with suppliers, people who, by virtue of knowing one another, can fix prices anytime they agree. In addition, suppliers are typically few in number. The consumer's position is weakened because he or she cannot shop efficiently. The potential buyer cannot cover an area that is wide enough to be significant. This is about to change.
Sherwin Goldman for wine. Like most of us, I have few people to whom I can turn. Otherwise, I rely on critics, experts who provide evaluations. In other words, today we have only two choices: ask a friend or trust an expert. Thanks to the work of Professor Maes and her colleagues, including former students who have started Firefly Network Inc. (formerly Agents Inc.), we now have a third way to find a new film, a hip restaurant, a timely news article, or a hot Web site. The concept is called collaborative filtering - a way to tap into other people's wisdom.
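The mechanism behind collaborative filtering is simple to sketch: recommend items liked by people whose past ratings agree with yours. The ratings, the similarity measure, and the function names below are illustrative assumptions, not Firefly's actual algorithm:

```python
# Toy collaborative filter: find the other user whose ratings most agree
# with yours, then suggest that user's favorite item you haven't rated.
# Data and similarity measure are invented for illustration.

ratings = {
    "alice": {"Mozart": 5, "Coltrane": 4, "Nirvana": 1},
    "bob":   {"Mozart": 5, "Coltrane": 5, "Bartok": 4},
    "carol": {"Nirvana": 5, "Mozart": 1},
}

def similarity(a, b):
    """Agreement over items both users rated: 1 / (1 + mean abs. difference)."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return 1.0 / (1.0 + sum(abs(a[i] - b[i]) for i in shared) / len(shared))

def recommend(user, everyone):
    """Suggest the unseen item best liked by the most similar other user."""
    me = everyone[user]
    others = [(similarity(me, v), v) for k, v in everyone.items() if k != user]
    _, nearest = max(others, key=lambda t: t[0])
    unseen = {item: r for item, r in nearest.items() if item not in me}
    return max(unseen, key=unseen.get) if unseen else None
```

Here alice agrees closely with bob (they both love Mozart and Coltrane) and disagrees with carol about Nirvana, so the filter would hand alice bob's Bartok - exactly the "ask a friend" shortcut, automated over strangers.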
Electronic word of mouth does. And it works both ways. It not only allows you to find music titles of obscure ensembles, for example, but it very quickly blackballs the bull. It means that one person's three-star restaurant can be anathema to another. We have seen just the beginning of a new kind of Consumer Reports - done by consumers, for consumers. For Pattie Maes and company, the ultimate effect of this technology will be demonstrated when, for instance, a band signs with a big label because Firefly generated so much excitement about its music. That is, when a new product will be launched because word-of-mouth technology formed an online cartel of people who want it to be sold. Pattie Maes (pattie@media.mit.edu), a professor at the MIT Media Lab and founding chair of Firefly Network Inc., contributed to this column. Next Issue: The Digital Absence of Localism
[Previous | Next] [Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.10 October 1996.]
NEGROPONTE
Being Local
John Perry Barlow suggests that cyberspace secede and become a state of its own. Most people don't find this plausible. But think about it for a moment. Just about every conflict in cyberspace can be traced to a single phenomenon - the absence of locality. The Net's envelope is the whole planet. Some governments and their regulators talk about curtaining their nations from the Net, monitoring bitstreams, and banning offensive Web sites - all essentially impossible tasks. Legal control is always local, and this is increasingly so. A country like Switzerland, itself very small, gives its 20 cantons (states) and six half-cantons enormous power. The federal government keeps a low profile, so much so that I defy you to name Switzerland's head of state. In many ways, the United States is similar to Switzerland. Visitors marvel at our liquor laws whereby, state by state and city by city, regulations change. While you may not be able to buy liquor in one town, you may in the next. Decency laws are similar in the range of views they reflect. An important part of the current political debate concerns increasing the control at local levels because, we are told, people are more civic-minded when they believe they will be held accountable and when control lies close to their doorstep.
topography. There are no physical constructs like "beside," "above," "to the north of." This is obvious. But it is not so obvious to the digitally homeless who govern most countries. The tragedy of the CDA is that countries less democratic than ours have already pointed to it and said, "You see, even the Americans think the Net is smut," failing to recognize that the CDA was instantly enjoined. Sovereignty is an odd and maybe useless concept within the digital world. But the real test of sovereignty is not decency. It is money.
Digital cash
Excuse my apparent digression to a treatment of money as yet another issue of bits and atoms. What follows is an incident that caused me to think about digital money in a new way. Two years ago, I was skiing in Klosters, Switzerland. On this occasion, the first ski day of the season, I found that the paper lift ticket had been changed to a smartcard, which, snugly nestled in your pocket, is read as you approach a turnstile - certainly convenient for the mittened skier. Since these smartcards contained electronics, the ski-lift company wanted them back and required a SwF10 deposit (approximately US$8) which can be redeemed at any lift or railroad station. I ended my first day near neither. Instead, I drove to the neighboring town to visit my father in the hospital. On the way, I stopped to buy some chocolates and, while paying for them, reached into my pocket and pulled out a handful of coins, including the smartcard. Without my reading glasses, I squinted at the coins and must have looked like a struggling tourist. The cashier reached over the counter to take the exact change. First she took the smartcard, saying that it was worth 10 francs, followed by the few additional coins she needed. I was stunned. Then I noticed a pile of smartcards on the cash register behind her. "What do you do with these?" I asked. "We pay the baker," she answered. This was too much. I visited the baker, and he had far more of these ski-lift cards, which he said he used to pay for milk, flour, and delivery. Obviously, the lift company must be running out of cards. What does it do? It does what our government does. It prints more. I sure hope the cards cost less than 10 francs! Is this significant? Yes, because nobody cares; that's what is interesting. Nobody cares that these lift cards have become local currency because they are just that - local. This currency moves slowly and is restricted to a small section of a remote valley in eastern Switzerland. Now, turn those atoms into bits. 
Suddenly locale has no meaning. I have a global currency as long as it's attached to a trusted entity - akin to the lift company - and that entity need not be a country. Most of us would trust GM, IBM, or AT&T currency more readily than that of many developing nations because the "currency" represented by those companies is more likely to remain convertible. After all, a guarantee is only as good as the guarantor.
The ski-lift currency moved by virtue of being in my pocket at the right time. As soon as currency becomes bits (dutifully encrypted), its reach is unlimited. In fact, while organizations like the EU struggle to achieve a single currency, cyberspace may develop its own much faster.
A new localism
Neighborhoods, as we have known them, are places. In the digital world, neighborhoods cease to be places and become groups that evolve from shared interests like those found on mailing lists, in newsgroups, or in aliases organized by like-minded people. A family stretched far and wide can become a virtual neighborhood. Each of us will have many kinds of "being local." You can almost hum it. Being local will be determined by: what we think and say, when we work and play, where we earn and pay. Next Issue: Laptop Envy [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.11 November 1996.]
NEGROPONTE
Laptop Envy
Henry Ford would be amazed by today's automobile ads. He'd find no mention of horsepower or acceleration. Instead, he'd find references to seemingly trivial accessories - automatic door locks, dimming mirrors, built-in cup holders, and the like. But he would have little cause for alarm. Form over function is often the path of mature products. It isn't necessary to mention basic features like the engine. Instead, lesser details creep to the foreground and provide character and uniqueness. Shortly, that shift will transform portable computing. My guess is that our children will never see a laptop characterized by the speed of its processor. Already we see that the form factor - the machine's physical shape - is more important than the speed of its microprocessor. My first laptop was a Sony Typecorder, released in early 1980. This svelte machine weighed 3 pounds 4 ounces, ran endlessly on four AA batteries, and offered a full-sized keyboard and built-in tape drive. Its most noticeable drawback was the one-line LCD, but I quickly got used to it. The Typecorder was expected to find its market among journalists. And for this reason, the modem only uploaded - it literally had just one suction cup for attaching to a telephone mouthpiece. Pretty primitive, but, boy, did I get a lot of work done with it. I suddenly found use for the interstices of life, the little wasted times, where I might otherwise nod off, doodle, or daydream. In fact, I used my Typecorder to write almost every proposal to fund the Media Lab's construction. In those days, though, a laptop often became an attractive nuisance. Working on an airplane was difficult, as people interrupted me to ask what the device was. It became easier to use pencil and paper during short flights until I perfected my body language to provide an easily readable Do Not Disturb sign.
In 1983, a young Japanese genius, Kay Nishi, designed the next-generation laptop, marketed simultaneously by Olivetti, Tandy, and NEC and first built by third-party Kyocera. These too were lightweight machines with full keyboards, powered by four AA batteries. But Kay's design
had an eight-line display. Though none of the models had the brushed-aluminum elegance of the Typecorder, they were several steps ahead and included support for a full duplex modem. I used my NEC PC801A for almost 10 years before switching to a PowerBook 180, which I still use today. But the evolution of laptops has gone somewhat downhill.
Common PINs
Now when I travel, almost everyone is pecking away at a keyboard. The one-line monochrome message has evolved into a full-color, 12-inch display. That is enormous progress, but at a powerful price. I now carry eight to ten battery packs during long trips. I won't even consider a laptop design that includes unstackable batteries. The fact that most batteries don't indicate their charge state is pathetic. It's as if the designer assumed that the laptop would always be used plugged in, and that people would travel with one spare battery at most. While advising a large Japanese firm on its future laptops during the late 1980s, I discovered that Japanese designers viewed them as movable desktops. Small homes and offices made it necessary to put a machine away and take it out again. They were designing machines that would never see a lap and would fit perfectly into a culture that drew hard lines between home and office, work and play. But portable computers are also for peripatetic, digital people. These are people who need more than a high-octane computer - they need a constant digital presence. Under these conditions, the value of some features suddenly changes. For example, lightness counts, but ruggedness counts more. I have abandoned PC card modems because their connector is too delicate; I prefer shoving the RJ-11 into the back. Today, flight attendants don't ask me what's on my lap; they ask me if it has a CD-ROM - in which case the FAA says I can't use it inflight. I doubt laptops radiate a big enough electrical field to be hazardous, but I'm certainly not going to argue, even if this falls on the ridiculous side of the safety issue. And heaven forbid that laptops should be fully prohibited (as they were for a while on Korean Air). If that happens, there will be something new to envy and market: tempested laptops, the machines the intelligence community uses to avoid radiation leaks (so spooks and counterspooks cannot snoop from a distance).
Real envy
Laptop form factors have approached their limits. Face it - keyboard size is driven by the size of your hands: you don't want your machine to be less than 11 inches wide. The screen probably ought to be about 8 inches tall, hence the machine needs to be 8 inches deep. And, if the machine gets too thin, it will become structurally awkward, if not uncomfortable. In fact, you want a certain amount of weight so it won't slide around.
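A laptop that knows where it is - the new requirement asked for below - reduces, at its simplest, to a nearest-city lookup from latitude and longitude. The city table, field names, and distance shortcut here are all invented for illustration:

```python
# Sketch of a location-aware laptop: map coordinates to the nearest
# known city and adopt its local settings. City table is invented.
import math

CITIES = {
    "Boston": {"lat": 42.36, "lon": -71.06, "lang": "en", "prefix": "+1",  "dial": "tone"},
    "Paris":  {"lat": 48.86, "lon": 2.35,   "lang": "fr", "prefix": "+33", "dial": "tone"},
    "Athens": {"lat": 37.98, "lon": 23.73,  "lang": "el", "prefix": "+30", "dial": "pulse"},
}

def locale_for(lat, lon):
    """Pick settings from the nearest city (crude equirectangular distance)."""
    def dist(c):
        dlat = lat - c["lat"]
        dlon = (lon - c["lon"]) * math.cos(math.radians(lat))
        return math.hypot(dlat, dlon)
    city = min(CITIES, key=lambda name: dist(CITIES[name]))
    return city, CITIES[city]

city, cfg = locale_for(48.8, 2.4)    # somewhere in Paris
print(city, cfg["prefix"])           # Paris +33
```

With the city resolved, the machine can pick its language, dialing prefix, and pulse-versus-tone setting without the user touching anything.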
Even the display has limits. You really don't need more than 100 pixels per inch. Today, display brightness and contrast are more important than resolution - so there goes power again (until somebody invents a good reflective display). But I do have one new requirement - something that planes and boats have and cars soon will. I want my laptop to know where it is. At a basic level, this means knowing about time and time zones. However, I mean something much more refined, including the ability to correlate longitude and latitude with cities, so that my laptop will know what town it's in, what language to use, what local telephone numbers to dial, and what protocols to use for Net access. Let it worry about changing dial tones or the need to use pulse versus touch tone. Computer vendors: You have the form factor about right. Stop producing smart-looking, power-hungry machines, and move toward simple-to-use, smart-acting machines. A simple start is letting my laptop know where it's situated. Next Issue: The Future of Paper [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1996, WIRED Ventures Ltd. All Rights Reserved. Issue 4.12 December 1996.]
NEGROPONTE
Joe Jacobson, coauthor of this article, believes that paper is a medium for the future. A medium that will build on its current ubiquity, but in an exciting and revolutionary way. How important are paper and ink in today's world? One in seven US patents makes mention of either paper or ink - more than make mention of any type of electronics! Hard to believe? Look around your office or home and count the number of items that have some form of print on them, then compare that with the number containing chips. The phenomenal readability and economy of printed ink on paper compels us, even in the digital age, to mark our behavior in this age-old manner. There is no lag when going from page 1 to page 44 of a book and then back to the appendix. So, too, with a newspaper. The presentation is immediate. No start-up, no logon, no button click, just paper where and how you expect it. Ink is great because every page and object gets its own. You don't have to go to a special corner of your desk to see ink. It's everywhere.
Electronic ink
One disadvantage to ink is that it's tough to erase. What we need is electronic ink that can be printed as freely onto as many different surfaces as traditional ink, but that is electronically mutable. It should be able to get up and walk away and change its shape, color, or intensity. Joe's ink can do all this. His secret takes a page from carbonless paper. The back of carbonless paper has a thin coating composed of tiny capsules filled with clear ink. These capsules, about 1 million per square inch, are then broken with the pressure of your pen. When the clear ink oozes out the back, it chemically changes a colored ink on the page underneath. Now, put that thin coating on the front of the page, and instead of putting ink in those capsules, imagine stuffing them with ping-pong balls one one-thousandth of their normal size, black on one side and white on the other. Then add some lubricant. Assuming you can control the rotation of the contents of each capsule - independently, electronically, and with the knowledge of where it's facing - you have electronic and reusable paper.
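A toy model of those capsules makes the idea concrete. The class name and the tiny page size below are invented: each cell holds a two-tone ball that the addressing grid can rotate to show its black or white face, and the page keeps its image with no further power.

```python
# Toy model of bichromal e-ink: a grid of capsules, each holding a
# ball that is black (1) on one side and white (0) on the other.

class EInkPage:
    def __init__(self, rows, cols):
        self.capsules = [[0] * cols for _ in range(rows)]  # start all white

    def write(self, row, col, face):
        # "Addressing" a capsule rotates its ball; unlike a backlit LCD,
        # the page then holds the image without drawing power.
        self.capsules[row][col] = face

    def show(self):
        return "\n".join(
            "".join("#" if c else "." for c in row) for row in self.capsules
        )

page = EInkPage(3, 5)
for col in (0, 2, 4):          # paint a crude striped glyph
    for row in range(3):
        page.write(row, col, 1)
print(page.show())
```

The model is deliberately naive, but it captures the division of labor: the ink is the display, and the addressing grid (itself printable, as conductive ink) merely tells each capsule which face to show.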
Given that the flat-panel display market is US$30 billion per year and growing, Joe is not alone in his quest. Enormous energy and thought are being given worldwide to making better computer displays. The current standard is the thin-film transistor LCD. It draws 2.6 watts, costs about US$1,000, and is constructed on glass. TFT displays are expensive because their million or more transistors are spread over the large screens. They consume generous amounts of power because the TFT backplane eats about a watt, as does the required backlight (transmissive LCDs let through less than 20 percent of the light). Because of the glass sandwich they are packed in, LCDs are not as rugged and cannot be used as flexibly as they should be. Technical improvements can still be made, and electronics companies around the world are investing billions of dollars in research and manufacturing facilities to do so. So, how can Joe compete with these deep-pocketed giants? Simple: he looks at the problem differently. It's not a display he is building. It's ink. The advantage of his mind-set is that ink is more general than paper. It can go on almost anything, and it's cheap. To make a display, just add a grid of addressing lines - which, by the way, is just another type of ink (of the conductive variety) - to control the behavior of your e-ink.
Radio paper
It turns out that the conductive inks used to make e-paper can function as radio antennas.
Other inks used in e-paper can be turned into radio transistors. This makes "radio paper," which can be as thin as notepad stock and sit on a coffee table or in your pocket, receiving FM news broadcasts. It "typesets" itself - every hour or day - with the latest news. With e-ink, a single piece of paper displays the news for years. By extension, any surface can now be modified into a display. Wallpaper of the future will be sold by the gallon in one customizable color, billboards will be painted once, wine labels will tell you when to drink the bottle, T-shirts will be watches, and our trees might live a little longer. This paper was coauthored with Joe Jacobson, assistant professor of Media at the MIT Media Lab. Next Issue: Pay Whom Per What When [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.01 January 1997.]
NEGROPONTE
When you fly your dog across the Atlantic, you pay a fixed fee. By contrast, inter-European airfare is determined by your dog's weight. Now, imagine that you couldn't buy an individual airline ticket for yourself or your dog. Instead, suppose airlines offered an annual pass that covered an unlimited number of flights. Or, imagine that you could buy individual tickets and the price was determined by your weight. Americans can't seem to comprehend the British National Health Service, yet they pine for the simplicity of Steve Forbes's flat tax. We hold strong if seldom consistent views on how we should pay for things. The digital world has created an opportunity to totally rethink billing methods. In turn, we will be forced to revisit the fundamental concepts supporting service cost and customer value. Most debates today on whether and how the Internet should be tariffed are mere reruns of debates that already have been played out in the world of atoms. Tomorrow's debates will be different. They will focus on issues unique to the world of bits. Unlike atoms, bits aren't consumed by consumers. They can regenerate - infinitely.
Subscribing to subscription
During the next few years, we will witness an explosion of payment methods. Yet, unlike the 40-odd calling plans offered by your cellular telephone provider today, payment plans will combine two principal ideas: flat fee and pay-for-use. Neither of these is superior to the other, but we'll see innovation in both. The mind-set of most netizens, both consumers and service providers, supports a flat fee - a fact the European telcos still don't understand. But the digerati are at fault as well for not recognizing the rush toward pay-for-use. Both can, and will, benefit the consumer. From the consumer's point of view, there are two apparently contradictory arguments in support of flat fees: certainty and serendipity. People like to know in advance what something will cost, even if the flat fee results in a cost that may be higher than they would have paid "by the
meter." Also, people want to browse, window shop, or find some unexpected treasure without the sound of a meter ticking. From the seller's point of view, flat rates offer even greater advantages. First, cost savings. Fifty percent of the price of a phone call covers billing - the cost of a call is cut in half right away by changing billing rates to a flat fee. Second, the relative certainty of income. The cost of a magazine subscription - typically much less than the price of a year's worth of newsstand issues - serves as an excellent example. The information provider is guaranteed a certain amount in sales and, further, gets paid in advance - both of which help cash flow. The more the provider has to invest in advance of sales, the better this appears.
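The billing arithmetic above can be made concrete. In the sketch below, only the "roughly half the price of a call covers billing" ratio comes from the column; the call volume, call price, and per-transaction costs are invented, and the model ignores everything but billing overhead.

```python
# Back-of-envelope comparison of flat-fee vs pay-per-use billing.
# Assumption from the column: ~50% of a metered call's price is billing.

def metered_cost(calls, price_per_call, billing_cost):
    """Seller's cost to serve: service plus one billing event per call."""
    service = calls * price_per_call * 0.5   # the non-billing half
    return service + calls * billing_cost

def flat_cost(calls, price_per_call):
    """Flat fee: same service cost, one billing event (cost ~0)."""
    return calls * price_per_call * 0.5

calls, price = 100, 0.40
for billing in (1.00, 0.25, 0.01):   # check, credit card, micropayment
    extra = metered_cost(calls, price, billing) - flat_cost(calls, price)
    print(f"billing at ${billing:.2f}/txn adds ${extra:.2f} per {calls} calls")
```

The output shows the column's point from the seller's side: at a dollar a check the flat fee is compelling, but once a transaction costs a penny the cost-savings argument for flat fees largely evaporates.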
Digital dumping
Japan consistently has been accused of dumping, whether it's supercomputers or semiconductors. The complaint results from the allegedly predatory practice of taking huge losses until the competition is obliterated, at which point a completely monopolistic position can be taken resulting in exorbitant fees. American trade associations cry foul. Congress is never far behind. Yet Netscape emerged and gave its browser away for free. Now, with 70 percent of the world's market share, the company charges US$49 and up per copy. Not a peep from anyone. Is this because Americans whine about dumping only until we do it successfully? No. It's because there is an unspoken acknowledgment that the rules of trade have changed. Bits aren't sold the same way atoms are sold. Netscape introduced an entirely new class of payment. Instead of a one-off payment for a given capability (x) or a usage-independent subscription (x/t), Netscape pioneered the idea that what you pay for is effectively the rate of change in functionality (dx/dt, for the left-brained). All forms of usage-independent pricing, however, have their downside. If you use something rarely, why pay a monthly overhead? As someone who drives little, I find the Swiss and Greek
systems of annual road fees far less attractive than the French and Italian toll systems where you pay as you go. Alas, the cost-savings argument of a fixed fee is rapidly disappearing for suppliers. This is due to the falling price of computer cycles and the introduction of new forms of electronic payment, both of which help reduce the cost of transactions to virtually zero. Today it costs a dollar to process a check and 25 cents to handle a credit card transaction. When payment systems cost a penny, the case for fixed-fee quickly erodes. However, the real driver for pay-for-use is more subtle: it is the opportunity to tie payments more closely to customer value, as discussed in the next issue. This article results from conversations at CSC Index Vanguard meetings with Richard Pawson (rpawson@csc.com). Pawson, who coauthored much of the text, is director of research for the CSC Index Foundation. Next Issue: Pay Whom Per What When, Part II [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.02 February 1997.]
NEGROPONTE
The search for an efficient means to handle micropayments has opened all sorts of possibilities on the Net. One of my favorite examples stems from a game under development by Rocket Science Inc. It is a Dungeons & Dragons-style role-playing game, which is given away and run over the Net at nominal or no cost. How does Rocket Science make any money? Here's how. You find yourself in a beautifully rendered medieval castle, face-to-face with a green, smoke-puffing, long-toothed dragon. You (actually your avatar) are dressed in a terry-cloth bathrobe, which is fine for stepping out of a hot shower but crummy for fighting dragons. Then you notice some nicely polished armor hanging on the dungeon wall. Guess what? You can rent it for five cents to fend off the monster. Significantly, the vendor has linked the pricing closely to the value received, or at least perceived, by the customer. The developers of future adventure games might even stoop so low as to exorbitantly charge the person trapped in a corner for a spear to ward off a band of ogres.
better off in an era of value-based pricing, but this is far from true. Some cable companies offered live coverage of the November Tyson-Holyfield fight at US$9.95 per round - the longer the fight, the more you pay. Eleven rounds later it didn't look like such a good deal (although the price was capped at $50). What shall we expect next time? Twenty cents per punch or $20 per half-pint of blood spilled?
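Value- and demand-linked pricing of this sort is easy to sketch. The quadratic load curve, base prices, and threshold below are invented (nothing here reflects any carrier's real tariff): a quote rises as the network fills, and a customer's standing bid clears whenever the quote drops to meet it.

```python
# Sketch of load-sensitive pricing: quotes rise with network load,
# and standing customer bids clear when the quote falls to them.
# The pricing curve and all numbers are invented for illustration.

def quote(base_price, load):
    """Price for an hour's call; rises steeply as load (0..1) climbs."""
    return round(base_price * (1 + 4 * load ** 2), 2)

def accept_bid(bid, base_price, load):
    """A standing offer clears when the load-based quote drops to it."""
    return bid >= quote(base_price, load)

print(quote(5.00, 0.1))              # cheap off-peak quote
print(accept_bid(10.00, 5.00, 0.3))  # a $10 standing bid at light load
print(accept_bid(5.00, 5.00, 1.0))   # the same network at full load
```

The carrier's "press 1 to place the call" offer and the customer's "call me back when you're ready" bid are just the two directions of this one function: one side quotes, the other side waits for the quote to clear.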
the license to own a car (which costs more than the car itself) - an effective if less-than-egalitarian approach to regulating traffic. The government of Western Australia has long employed this approach to source everything from telephones to toilet paper. Where I would like to see the technology applied is in plain old telephone service. "Mr. Negroponte, this is AT&T's international, line-load balancing system. Our loadings are light tonight, so we can offer you an hour's conversation with your son in Italy for just $5. Press 1 to place the call." "Hello AT&T, this is Nicholas Negroponte. I'd like an hour's videoconference at 128 Kbps with my mother in London within the next 48 hours. Any time of day is OK. I'm offering $10. Call me back when you're ready to place the call." This article results from conversations at CSC Index Vanguard meetings with Richard Pawson (rpawson@csc.com). Pawson, who coauthored much of the text, is director of research for the CSC Index Foundation. Next Issue: Dear PTT [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.03 March 1997.]
NEGROPONTE
Dear PTT
Most Americans don't realize that fewer than 10 countries in the world have private and competitive telephone systems. The rest have government-owned monopolies. These are usually part of a post and telecommunications ministry, commonly known as Posts, Telephones, and Telegraphs, and are run by civil servants whose government employer is both a telecommunications regulator and a service provider - clearly positions of conflict. The PTTs provide evidence, once again, that sovereignty and enterprise make strange bedfellows. Because of government ownership, PTTs can get away with poor service and unilateral decision-making, while realizing handsome profits from sinful pricing. A recent six-minute local call from a pay phone in Switzerland cost me CHf17 (US$12). During the summer of 1993, the Greek government fell to the socialists, largely because the New Democrat Party threatened to privatize the phone company. This would have resulted in downsizing in an economy where around 5 percent of the population are public employees. At the time, I was told that incomplete telephone calls make up 25 percent of the Greek phone company's income. Of course, there is no way for me to check this figure. But I wouldn't be surprised. Two years earlier, I actually received a monthly $2,000 phone bill for a period of time I was not in Greece and my phone was disconnected. "Too bad," they said. "If you don't pay the bill, we will cut off your phone service - don't bother contacting any of our subsidiaries, such as the police, the judicial system, or the better business bureau."
and current politicians (or potentates) will be remembered for filling their nation's coffers. But therein lies the rub - should this money be used to patch up the general indebtedness created by politicians, or should it be invested in the people who produced it in the first place? The Turkish government boldly states that it will use the money from the sale of its PTT to cover national debt and help cope with an inflation rate greater than 100 percent per year. This seems very wrong to me. Turkish citizens have been paying high prices for poor telephone service for years. The value of a national PTT is due, in large measure, to the citizens who have been good and faithful clients. As shareholders in government and stakeholders in telecommunications, these citizens deserve better spending plans when their government receives such a large windfall. So, here is my suggestion in the form of a short open letter: Dear PTT, My sincere congratulations for your plans to privatize your phone company. But what will you do with the money? Let me offer a suggestion: connect your elementary schools to the Internet and provide as many personal computers as you possibly can. If you put as little as 10 percent of this nonrecurring revenue into wiring your schools, you would be investing in your future. Your children don't have access to enough books. Tomorrow they could have access to the world's libraries. Unlike us, they could grow up with a global perspective, seeing and learning from many different points of view. What stands between kids and education is resolve - yours. It once was money, too. But you and your government are just about to get a basketful of that. Your biggest natural resource is the human capital of your children. Surely they deserve just a fraction of the proceeds from this historic event.
Reality check
From a macroeconomic perspective, one can argue that government money has no color - whether from taxes or the sale of public companies, it is all the same. However, from the taxpayers' point of view, there is a sense that government should honor its word, and a belief that, for example, a road tax should be applied to roads. Perhaps we can establish the same sense of accountability for a wired society. Only $6 billion a year is needed to meet the worldwide need for basic, primary education, which currently reaches only 80 percent of children. Unicef strives to kindle a sense of absurdity by juxtaposing this modest $6 billion against the $40 billion per year spent on golf and the $85 billion a year spent on wine. But it's difficult to tax the sport and drink consumed in country A to pay for
education costs in country B. My suggestion is largely an expedient, connecting cause and effect, using a one-time windfall as a one-time start-up cost, because it is likely that all countries will privatize their telecommunications within the next 10 years. In the United States we estimate that $10 billion to $20 billion are needed for the one-time charge to connect all K-12 schools. Vice President Gore is doing a good job of raising the sensitivity of the nation's citizens while providing incentives for companies to step in. Other nations don't fare as well because they are less digital - in terms of their citizens and their leaders. Why are they less digital? Partly because of the PTTs. Germany is a good example: the old Deutsche Telekom made it prohibitive to be online. So, dear PTT, even if my argument does not stand up on logic, I hope you'll do the right thing anyway - out of a sense of shame and guilt. Next Issue: Tangible Bits [Back to the Index of WIRED Articles | Back to Nicholas Negroponte's Home Page | Back to Media Lab Home Page]
[Previous | Next] [Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.04 April 1997.]
NEGROPONTE
Tangible Bits
At the age of 2, Hiroshi Ishii experienced his first PDA - an abacus. Even though this calculator was used primarily to manage Mrs. Ishii's budget, Hiroshi found many more interesting "dual uses." In fact, he used the abacus as a musical instrument, a toy train, and a back-scratcher. His mother didn't mind, and Hiroshi soon learned the "music" of addition and multiplication at the simplest level: the tune meant that the beads - predigital bits - were in use. Forty years later, Ishii is determined to carry the idea of tangible bits forward and into the problem of making the human-computer interface seamless with the physical world. This mission obviously includes sound, dual uses, and - most important - the engagement of muscles and motor skills. Although somewhat less apparent, it includes attending to the peripheral senses - call these the ambience.
Feely-touchy
Transforming human-computer interaction from abstract mousings and keystrokes into hands-on engagement is the challenge Ishii and his students are addressing. They are building interfaces in which bits are embodied and grasped as physical objects and surfaces. Physical icons, or "phicons," are small objects that serve as both handles for, and containers of, information. A prototype of such an interface exists in the form of a horizontal display surface that senses the physical objects placed on it. This interface allows computer-generated video, graphics, and 3-D models to be accessed by placing a phicon on the display surface, for example. You interact with the content by manipulating the phicons, inspecting the space with "lenses," or probing the space with "instruments."
World as interface
Yet there is a world of sensation beyond that which can be grasped with our hands or stared at with our eyes. Ishii dreams of mapping Earth's nuances of warm and cold fronts, trade winds, and tidal waves into the circulating currents of his hot tub. Why? Because, he explains, experiencing a hurricane in the Bahamas as a whirlpool around your ankle or a monsoon in Asia as a warm spot on your shoulder blade allows your skin to become the interface between the meteorological world and you. Caught within the "painted bits" of glowing pixels, Ishii returns to his childhood abacus for a vision of future interface design, intent on using everyday physical objects and surfaces - a world full of incense bottles, writing desks, and window glass. As humans, we have myriad skills for processing information through tactile interaction with physical objects. The idea of tangible bits includes peripheral senses. Note that you often close your eyes when "feeling" something or while trying to determine the source of a particular sound. This process of concentration extends to ambience itself. You know something without "looking" at it. But while computing, all we normally touch are transducers, and what we see is always "in our face." Yet our peripheral senses and the surrounding activity are equally important. Why has the use of background displays been lost in computer interaction? Could this stem from a cultural divide that Ishii's Eastern perspective reveals?
Sicilian kitchen
The curtain rises; it is 2020. Quantum computing made the metric of qips (quadrillions of instructions per second) obsolete years ago, and computer interfaces have roughly the age and maturity of the 1978 automobile interface. To view the true state of the art, we visit a Sicilian kitchen and look to the center table, only to find Bread. Pasta. Olive oil and an overripe tomato. Perhaps the bread knives are edged with guaranteed-never-to-go-dull nanoceramic and the oven is fusion fired. The only glass screens in the kitchen are found on a window overlooking the garden and the oven door (both nanocleaning, of course). The only keyboard resides on the faux-vintage typewriter. And all the mice play tag with the cats. This Sicilian kitchen is digital, of course, but it is also intimate and inescapably physical. While a few frantic folk and workers of the midnight hour consume energy pills, the Sicilians take pleasure in their food and embrace its substance and its preparation. Ishii and I join the Sicilians in this quest to maintain the primacy of the physical world as interface - and strive to make the recipe books, green peppers, and wine bottles of the future proud. This article was written with Professor Hiroshi Ishii (ishii@media.mit.edu), who founded and directs the Tangible Media Group at the MIT Media Lab, and his graduate student assistant, Brygg Ulmer (brygg@media.mit.edu). Next Issue: 2B1
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.05, May 1997.]
NEGROPONTE
2B1
There is a new force in the world: the growth of cyberspace. Inherent in this force is a breakdown of barriers. Everyone talks about crossing barriers of geography, gender, and culture. But the most important barrier is perhaps the least appreciated: the barrier of age. Empowering kids is a double whammy because they're the ones who will most effectively break down the other barriers as well. The children of the world are critical to achieving a united world. Those of us who grew up in multiracial societies are likely to be more racially unprejudiced than our parents. I see the same difference in people younger than me, who grew up in a more gender-enlightened era; many just cannot understand how much of an issue gender was in my time. I bet the kids of tomorrow will have the same feeling about nationalistic thinking. In fact, we are looking at a generation that will feel about culture the way most of us today feel about race and gender - identity and unity, being individual and plural at the same time. What's wrong with this picture is that more than 50 percent of the 1.2 billion children ages 6 to 11 have never even placed a phone call. Yet the suggestion to give the kids of the world access to technology raises an obvious question: What sense is there in providing computers and Internet access to children in nations where there is inadequate food, clothing, and medicine? The short answer: lots.
Déjà vu
In 1981, French president François Mitterrand gave author Jean-Jacques Servan-Schreiber the mandate to establish a World Center for Computation and Human Development. The idea was based on Servan-Schreiber's book The World Challenge. Simply stated, developing nations should and could leapfrog the industrialization process and jump into the trade of bits, instead of atoms. What gave this idea substance and credibility was the work of Seymour Papert, who had just published Mindstorms. Papert's theme of "teaching children thinking" was a natural complement to The World Challenge. And, with the initial backing of the then-wealthy OPEC, these crazy ideas started to make sense. Saudi leader Ahmed Zaki Yamani delivered a powerful address on human development that fall in Vienna. Paraphrased, he said, don't give a poor man a fish; give him a fishing rod. The leap from a fishing rod to a personal computer was, for some of us, easy. The center's work focused on the use of computers for primary education in developing nations. The first site was a school outside Dakar, Senegal. This small experiment was just terrific; the kids had the most fun teaching the principal. Kids from the jungle learned faster than kids from the city. The second location was Colombia; it had the full personal commitment of President Belisario Betancur Cuartas. For a short period, this outrageously bold idea looked like it was going to be the beginning of something very big and important. It was not. Within months, the original mission was pushed aside in favor of addressing more immediate needs in France, where, after all, the center was based. In less than six months, the "world challenge" was replaced with "France's need" - installing a national fiber-optic system.
Timing
The 1981 Paris initiative was way ahead of its time. Even if it had not unraveled for other reasons, it would have failed because of the absence of global telecommunications and the rarity of personal computers. The IBM PC had not even been introduced in Europe. Today, the timing is right. Two major forces fuel this timeliness: worldwide awareness and use of the Internet and the spread of personal computers into the lives of children - at school and at home. Because of these forces, a group of us has created a nonprofit organization called 2B1, whose purpose is to bring the digital world to kids in those places least likely to provide access to it. The idea is not to go country by country, but to target the world as a whole. Sounds cuckoo, but it isn't, because the Net itself and the children using it now are very much part of the solution. In parallel, the MIT Media Lab is also focusing on children, learning, and human development. The scientific and technical questions it faces range from language translation to storytelling to cultural understanding to the roles of nonverbal language.
Developing digerati
On July 17, MIT and 2B1 are cohosting a five-day workshop that will bring together people who have taken bold initiatives in bringing computers to children who live in technologically isolated places. For example, teachers who have defied the logic that you need to provide more chalk before you bring a computer into a primary classroom. Or social activists who have brought computers to street children who don't have schools at all. But especially those who have found even more imaginative ways to bring children into cyberspace. Check out www.2b1.org/. We will pay travel, room, and board expenses for as many people as we can afford, with a strong priority given to getting at least one or two individuals from every developing nation. Do you know somebody who should attend? Our goals for the meeting include developing a 2B1 plan of action, collaborating with existing groups, and establishing a major granting program of hardware, telecommunications systems, and know-how. Feels big? You bet it does. But just like the distributed Internet, this too can grow. In fact, the Net is the encouraging force. It is both global and popular - and it is what we did not have in 1981. 2B1 is a nonprofit foundation, whose president is Peter Cawley (peter@2b1.org), vice chair and chief scientist is Seymour Papert (seymour@2b1.org), and director of product development and interface design is Dimitri Negroponte (dimitri@2b1.org). Other participants include myself, Saj Nicole Joni, Tom Grant, Rodrigo Arboleda Halaby, and others mentioned at the Web site. Next Issue: Digital Obesity
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.06, June 1997.]
NEGROPONTE
Digital Obesity
Recently, I've been forced to look for new hardware and software, and I have since been suffering the indignity of updating myself. I cannot believe that manufacturers have gone so wildly astray while I wasn't looking - complexity is out of control. I have spent much of the past 30 years in front of a keyboard and display. People have joked about my dependence on email since 1970, and older flight attendants remember me using a laptop as far back as 1979. In fact, I don't know anyone more wired than me in his or her daily life. This is my way of saying I'm no piker. But computers can be like ski boots. Old-timers are prone to keep their well-worn and comfortable equipment. Upgrading to the newest boot styles each year would raise hell with one's feet. Likewise, I am old-fashioned in my digital ways. I don't even use an email program but ride bareback on Unix instead. But inevitably, there comes a time when those favorite, laced leather boots need to be exchanged for a new pair. That time arrived in early 1997, and the new, modern digital headaches I discovered still haven't subsided. Mind you, I'm lucky. I have the full and generous support of some of MIT's finest technical staff at my disposal. I wonder who the rest of society turns to.
Overweight software
The problem manifests itself as featuritis and bloated software systems. I am fond of quipping that every time Andy makes a faster processor, Bill uses more of it. Turns out it's not so funny. Have you looked at the size and complexity of Microsoft Word recently? Outrageous. And each successive version has gotten worse. It's to the point where most programs are almost unusable and run slower than the ones I used a decade ago. What is wrong with you Redmond folk? Maybe you'll learn something about ease of use from your recent purchase: WebTV.
My adult and professional life has been spent trying to make computers easier to use, starting as far back as 1965. In those early days, people thought only sissies needed graphics. In 1972, when we devoted 256K to storing images, most people wrote it off as just another indecency and MIT arrogance. Why would anybody in their right mind commit so much memory to the icing, not the cake? Three decades later, we find a generation of kids who count memory not in Ks, but in Ms (and soon Gs). This is actually quite wonderful, but look at what we are using it for. The interface hasn't fundamentally changed since the introduction of the Macintosh more than a decade ago. It's just harder to use and obscenely obese. Someone needs a wake-up call. As a longtime devotee of Apple computers with a dozen active Macs currently in my life, I find myself extremely frustrated with the latest models. The little computer that greets you with a smiling face on start-up has become so complex that a Mac is now no simpler to use than a Wintel machine. So, like many, I decided it was time to switch platforms. I made my first foray into Windows two months ago and was so appalled that I raced back to the Macintosh like someone returning to a smelly bus after trying the newer subway system. I am amazed that so many people use Windows 95 without complaint. I guess there is a grin-and-bear-it attitude because THERE IS NOTHING ELSE. Yes, I am yelling.
But what is there to do about it - other than bitch? Is it time for a strike or a users' cartel? You bet it is. Whoever is guiding those young folks making the operating system and applications of tomorrow should put his or her foot down. It is time to lose weight. Stop making software that options you to death and start delivering simple, easy-to-use apps. The stuff you write is written by geeks, for geeks; why not try writing something for the rest of the world? An interim solution or holding pattern might be to eschew those beastly apps and steer beginners to the Internet - through an online system like AT&T WorldNet. But when I went to install it myself, the instructions' first words, printed right on the CD-ROM, were: "Turn off the virus-protection software using the extensions manager." What the hell does that mean to Mom and Dad? Then, perhaps out of spite, the installer crashed my system. Next Issue: RF Helps Marriage
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.07, July 1997.]
NEGROPONTE
Wireless Revisited
In the early days of cellular telephones, service providers touted "anything, anywhere, anytime," which I thought was real stupid. I suggested that a better jingle would be "nothing, nowhere, never - unless," where the "unless" clause was the added value of the wireless transmission, the real service being offered (see "Prime Time Is My Time," Wired 2.08, page 134). Since those days, wireless service has grown in all sorts of places, for all sorts of reasons. For example, wireless phone systems are widely installed in developing nations because of their rapid deployment and low cost, even if there is no real need for mobility. Or look at some developed nations, where people carry cell phones purely for reasons of security, hardly the original purpose. Like everybody else, I have a cell phone but never leave it on. In large part, this is because I don't want to be disturbed. It is also because I don't "do phones." I find email far more effective, and, as a result, I use telephony mostly for data, mostly from fixed landlines. Therefore, my day-to-day experience with radio frequency (RF) has been somewhat limited.
In the past, I would excuse myself from the dinner table, from watching TV on the couch, or from lazing around the house to go off and work at a keyboard. Being online meant not being a part of the household. But no one complains when you pick up a newspaper, magazine, or book while others are watching TV. Right? Now, I can do the same with the Net and the Web and be no more antisocial than if I were reading a magazine. Think about it. Sounds trivial, but it sure nullifies the complaint my wife has had for more than 20 years: she says that my back is all she usually sees. Not any more. This got me thinking: Was the Negroponte Switch correct after all?
Granularity
Many cell-phone users, believe it or not, think they are using a walkie-talkie-style communications system that is completely wireless - from one handset to another. In truth, most often there is a lot of wire in between. Typically, the wireless portion is only a fraction of the distance covered. For this reason, instead of the simplicity of the Negroponte Switch, think of the more complex public/private nature of the bits. Bits will travel wirelessly in proportion to the degree to which they're public. The bits that represent the Super Bowl, for example, are well justified for delivery by satellite TV. There really is no better way to get the same bits to 150 million Americans simultaneously. My phone or computer, however, merits less wireless distance. In the case of my newfound marriage assistant and spread-spectrum LAN, it need reach only across my home. In the case of my TV remote control, it need reach only across the room.
What this suggests is that wireless communication should be designed with the nature of the bits in mind. The issue is not wired versus wireless but the reach of the signal. It also means that you had better not sell short the landline phone company or makers of fiber-optic cable. In the end, we have to remember that nature has provided us with only one radio spectrum, no matter how cleverly we choose to use it. By contrast, insofar as a single fiber is more or less equal to the whole RF spectrum, the bandwidth of fiber landlines is effectively infinite, since we can keep on making more and more, running the factories three shifts a day, seven days a week. For this reason, the granularity of RF will get smaller and smaller, for more and more personal bits. A good example of small-grain RF is the scale and extent of a home wireless LAN. You'll like the freedom it affords, and it might even help your marriage. Next Issue: Redisintermediation
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.08, August 1997.]
NEGROPONTE
Reintermediated
The new story of disintermediation is an old bits-and-atoms classic. The complex logistics of "things" have created a food chain of middlemen and wholesalers who import, export, warehouse, and redistribute physical items. For this reason, when you buy tomatoes for US$1.57 per pound, the grower gets less than 35 cents, while the rest goes to all the people in the middle (in the case of tomatoes, up to seven intermediaries may be involved). If you could buy direct, it would be a no-brainer to split the difference with the farmer, which would no doubt please both of you. In fact, this is how online retailing started. Boutique winemakers north of San Francisco could not attract the attention of large wholesalers, nor were they satisfied with limited local distribution. Enter the cork dork. Brothers-in-law Robert Olson and Peter Granoff, who refer to themselves as "propellerhead" and "cork dork," created Virtual Vineyards (www.virtualvin.com/), one of the first Web sites to retail anything, let alone wine. In theory, they run a no-inventory business by arranging to drop-ship wine directly to your home, while collecting a nominal fee for arranging the sale and handling the billing. But, wait a second. Why do I even need them? Why couldn't each vineyard run its own Web page and just agree on simple terms (full body, tannic, fruity, et cetera) and conditions (blend of grapes, use of oak, price per bottle, et cetera), so that a computer program could do the work of Virtual Vineyards, thereby cutting it out as well? Well, winegrowers could. And someday they will, albeit none too soon.
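The tomato arithmetic above works out as a quick sketch (the two prices are from the column; reading "split the difference" as meeting at the midpoint is my interpretation):

```python
retail_price = 1.57  # what you pay at the store, per pound
grower_share = 0.35  # what the farmer actually receives
# Buying direct and "splitting the difference": buyer and farmer
# meet at the midpoint of the spread the intermediaries used to take.
direct_price = round((retail_price + grower_share) / 2, 2)
buyer_saves = round(retail_price - direct_price, 2)
farmer_gains = round(direct_price - grower_share, 2)
print(direct_price, buyer_saves, farmer_gains)  # 0.96 0.61 0.61
```

Both sides come out roughly 61 cents ahead per pound, which is the whole argument for disintermediation in one line.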
the showroom. It is in effect a factory outlet. For this reason, it's not hard to imagine buying directly from the factory. Automobile manufacturers would embrace this strategy aggressively if it did not risk annoying the prime retail channel in the short term. Car salespeople are comforted by this reality, but they also know their days are numbered - especially the young dealers, who won't be dead before it happens. They may be rude, but they're not dumb. They need to adopt a better attitude, become more pleasant, and focus on aftersales. The latter can be as silly as a birthday card or as serious as a warrantied house call. Therein lies the secret: as you are about to be disintermediated, reintermediate yourself by adding a new dimension of value. Typically, this is a service with some flavor of added personalization.
Reintermediated publishing
The people who really ought to be disintermediated are publishers. Here I draw a distinction between magazines (of course) and books: the former sells context, and the latter sells content. The content side of the equation can and will go direct the fastest. Since books are physical things distributed largely through thousands of retail outlets that buy one or two copies at a time, you and I would have trouble distributing as well as Knopf. Otherwise, we really can do without them. But tilt. People will say, "I bought your book because Knopf published it." Knopf was the talent scout, the finishing school, the company whose judgment is trusted. Well, rubbish to that. Think of the last three books you've read. Do you remember the publisher? You know the author and the title, as well as the book's color, shape, and thickness. But you're unlikely to recall which company published it. Whether you read Grisham or Goethe, you read the author, not the publisher. That's why traditional book publishers will slowly but inevitably disappear. Bookstores will vanish even sooner, as they bring almost no value over a Web site like Amazon.com. So who will remain? The answer is a new intermediary. One who - or that - tells you which books you are most likely to enjoy. Think of it this way. How many hours have you wasted on a book that was just not worth your time? I feel about reading a book the same way I feel about waiting for a bus. Having already invested time doing so, I feel I might as well amortize that time by spending a bit more, and a bit more, until the bus comes - no matter how late. The digital intermediaries may change that forever. I want them to. So do you. Next Issue: On Digital Growth and Form
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.09, September 1997.]
NEGROPONTE
On Digital Growth and Form
Being digital has three physiological effects on the shape of our world. It decentralizes, it flattens, and it makes things bigger and smaller at the same time. Because bits have no size, shape, or color, we tend not to consider them in any morphological sense. But just as elevators have changed the shape of buildings and cars have changed the shape of cities, bits will change the shape of organizations, be they companies, nations, or social structures. We understand, for example, that doubling the length of a fish multiplies its weight no less than eight times. We know that suspension cables break after a certain length because they cannot support their own weight. We are almost clueless, however, about the fractal nature of the digital world and how it will change the shape of our environment. Yet the effect will be no less substantial than if we changed the force of gravity.
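The fish figure is the familiar square-cube law; as a back-of-the-envelope check (my restatement, not the column's):

```latex
% Weight tracks volume, so it scales with the cube of length:
W \propto L^3
\quad\Longrightarrow\quad
\frac{W(2L)}{W(L)} \;=\; \frac{(2L)^3}{L^3} \;=\; 8 .
% The suspension-cable claim is the flip side: strength grows only
% with cross-section (L^2) while weight grows with volume (L^3),
% so beyond some length the cable cannot carry itself.
```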
Yes, many pieces of our world - work and play - do have a centralism to them. Hierarchy has its place. But even the most conservative centralist will agree that organizations have flattened, with considerably fewer levels between top and bottom. Mitsubishi Trading Company, for example, summarily removed an entire level of middle managers, and other firms are doing the same. In part this is due to a competitive market economy that demands streamlining. But in greater part it is because modern communications allow people to deal with more than seven others (plus or minus one). Add current-day management doctrine and you get even thinner social forms. Leaders distinguish themselves by what they do, not by where they sit - something which many politicians and industrialists have yet to note. The computer industry learned this with open systems, where competing with imagination proves far more profitable than doing so with locks and keys. A libertarian view of the world adds flatness to decentralism and concludes that large organizations, like the nation-state, are doomed. This is only half true. Instead I would liken the digital world to indigenous architecture, where local and global forces make for individualism and harmony at the same time. Each house on a Greek island is totally its own design, reflecting the ad hoc needs of various individuals over time. But common use of local materials - building in stone and applying whitewash to reflect the heat - results in a collective order. As soon as you use steel and air-conditioning, however, the only way to protect that harmony is to legislate it, relying on zoning laws to do what nature did before.
physical space and the ability to lose lots of money in order to make a lot more. The value of being small needs no explanation. At this point in history, it is hard to imagine that our highly structured and centralist world will morph into a planetful of loosely connected physical and digital communities. But it will. For this reason, more and more attention needs to be paid to just how, and how well, we can coordinate this new mass individualization. It is, for example, easy to see who will build the road in my village. It is considerably harder to see who will connect our villages, especially if some have less wealth or control than others. It is also hard to see how we will agree on various standards. Think of it - we live in a world where we cannot even agree on which side of the road to drive on. Next Issue: New Standards for Standards
[Copyright 1997, WIRED Ventures Ltd. All Rights Reserved. Issue 5.10, October 1997.]
NEGROPONTE
Being Anonymous
The digital world is about personalization. While atoms are prone to be repeated as industrial-age artifacts of assembly-line manufacturing, bits can be easily changed to deliver a customized product. And the new age of individualization brings with it all kinds of personalized belongings. In the most trivial sense, it means more things like vanity license plates and monogrammed shirts - your logo, not Ralph Lauren's. In a deeper sense, personalization provides comfort, security, and self-esteem. It is the means by which we are understood and express ourselves as individuals. The benefits of being unique can be as mundane as being greeted by name or as magical as ordering a full meal with nothing more than a nod. It can be as complex as a long friendship that allows another person to understand, for example, the difference between what you mean and what you say. You know they know and they know you know they know. That is personalization. Acquaintance is the tool humans use to draw inferences, to unravel ambiguities and fill in missing information. Knowing a person makes communication much easier. But if we are not careful, that knowledge will leak into unwanted places, and we will pay the price in lost privacy. And by privacy I mean not just a theoretical and God-given right, but an everyday need and convenience.
Privacy pirates
Humans like to delegate. (You cannot do everything yourself anyway.) And because modern society increasingly engages other people in our personal affairs, we knowingly and unknowingly trade off the risk of betrayal for the value of personal attention. In the case of some services, like the practice of law and medicine, the potential hazards of revealing facts about yourself are reduced by legal or ethical practices. By contrast, a high-society butler or upstairs maid is not bound by a professional code and is often the star witness in domestic disputes. Nobody knows you better than the person who has been serving your idiosyncrasies, filtering your information needs, running your bank accounts, or making your bed. Most of us don't mind the risk. The quality of life is so greatly enhanced by personalized service that we are willing to freely reveal a great deal about ourselves to many other people. It is important to note that several parties are usually involved, even in our inner circle of friends and assistants. Fortunately, no one person has a complete model of us, and it is hard for them to share the parts. I will even entrust a machine with much of the same personal information. This information, however, is much more easily shared among other computer and human agents. In fact, far too much of the information about me - my "digital self" - is not coming from me directly. It is being culled without my knowledge and used for things that have no direct benefit to me. It is being pirated for purely commercial purposes, turning my personal data from an asset into a liability. Junk email and telemarketing solicitations are increasingly frequent examples of what results from this hijacked and repurposed information - of how good can change so quickly to bad. Because digital buccaneers gather their information surreptitiously, all too much of it is wrongly inferred rather than fact. If my credit card shows lots of charges at Japanese restaurants, it may mean I like sushi, or it may mean I have Tokyo-based business associates but hate Japanese food. American Express will never know which. I would be happy to tell them, of course, if there were any value to me in doing so. In the meantime, I'll pass whenever I can on becoming a data sample every time I visit a Web site, thank you very much.
Being nameless
My wife and I keep a home in France. With the exception of the driver for Federal Express, nobody knows us, or even our name. The luxury of anonymity is just as extraordinary as the opposite extreme we enjoy elsewhere. (Keeping it, of course, is an art form.) And anonymity has lots of small benefits, especially when it comes to peace and quiet. In a physical place, unfortunately, you cannot have it both ways. In cyberspace you can. A lot is written about digital identity, particularly about using the Net to role-play, to pretend you are somebody other than who you are. Almost nothing is written about the value of being nobody - not somebody different, but nobody in particular. The power of digital anonymity first struck me watching an electronic community for people worried that their spouse might have Alzheimer's. Because of the anonymity afforded by the chat room, people were willing to ask questions they would never have addressed under other conditions - and to become part of the community.
A less moving example of the value of being nameless is ecommerce. How many times have you arrived at a site and not purchased something because you were asked to fill out a detailed questionnaire? Independent of worrying whether Ken Starr might subpoena your book-buying records, you don't respond because you just don't want to hear back from everybody who sells you something. When Amazon.com emailed some advertising after my first purchase, I asked that they stop - they have been terrific about honoring the silence I requested. This type of digitally responsible company deserves to be successful.
Anonymous payments
Sadly, not all merchants will be as respectful of your privacy, and there's no accepted way of making a payment without revealing your identity. Even smartcards have to reveal their identity in order to be secure. The conventional wisdom in the payments field places little value on anonymity: "Privacy," I repeatedly hear, is the fetish of ponytailed paranoids who have something to hide. Wrong. Digital privacy is a simple, practical matter, a necessary step so we can get on with ecommerce without creating an avalanche of unsolicited interruptions. The digital world is already too noisy. I want anonymity for reasons of tranquility, not dishonesty. If done right, digital money is far better than cash. Beyond ease of payment, it could allow governments to eliminate money laundering, and let parents give children an allowance that can't be spent buying Penthouse. Furthermore, anonymous payment systems need not be symmetrical, as the physical world demands. You can pay anonymously, but retain the option to change your mind should you later need to prove that you paid. Still, on far more occasions than you can imagine today, you will want no identity in transactions. You will want to be nobody. Next: Pricing the Future
[Copyright 1998, WIRED Ventures Ltd. All Rights Reserved. Issue 6.10, October 1998.]