
The best way to understand any current innovation is to study the initial invention.

Let's look at three of the biggest innovations before the dawn of TV, film, and the Internet. The telegraph: where print meets speed. The radio: where sound meets speed. And the photograph: where an image is captured in time.

History of the Telegraph
The telegraph created the character of our current information age. It broke the
connection between communication and transportation. Prior to the telegraph, the
speed of communication was the speed of a train--about 35 miles per hour! The
telegraph dropped it to just a few seconds: point A to point B in an electronic pulse! With
it a new experience was unfolding: information was no longer just local and rooted in
immediate context. Instead, ideas and news were now presented in what Shane Hipps, in his book The Hidden Power of Electronic Culture, called a "mosaic" of unrelated data points with no apparent connection.
Information ceased to be altruistic in nature. It became a hot and immediate commodity, something that could be bought and sold for profit. It began to whisper a new subliminal message: that truth itself was, in the words of Neil Postman, "idiosyncratic," and that history is irrelevant, with no basis for valuing one thing over another. Under this influence it didn't take long for Nietzsche to adopt a nihilistic worldview and declare the death of God!

Impact of the Telegraph and its Baby Brother, the Internet


As our thinking patterns continue to mirror this communications pattern we see that the
telegraph, and now its younger brother, the internet, have created a new way of
thinking: the denial of any real sense of a metanarrative or overarching story or truth
that organizes and makes sense of the world. It's just a world wide web of random information, right? Isn't this what we experience when we scroll through our news feeds on Facebook?
Though not easily discerned, I submit to you that this aspect of post-modernity poses the greatest challenge to the claims of Christ and the clear metanarrative we find in Scripture.

History of Radio Communication


Radio returned us to the tribal campfire where the spoken word and corporate
experiences rule. Just as in tribal cultures, the radio allowed us to share songs, stories,
news together at the same time, yet far beyond the warmth of a local campfire. Orson Welles's famous 1938 radio broadcast of The War of the Worlds proved this point in a very real and frightening way. This was the first electronic implosion, or reversal back to experiencing in some measure a prior form of media: the age of the orator. And although we remained a culture still largely dependent on literacy, radio created a hybrid consumer, in that we could be described in radio's heyday as a tribe of ones. Radio snapped us back to communal methods of learning that were experiential, oral, and corporate, rather than rational, visual, and private.

History of Photographic Communication


During the 19th century we witnessed an amazing convergence of three media technologies, when photography converged with the printing press and the telegraph, allowing images and icons to be produced on a mass scale and sent everywhere at once. In many ways the graphic revolution returned us to the iconic world of the Middle Ages. Over time this iconic symbol system began to dissolve our dependence on literacy. Don't believe me?

Impact of Visual Imagery on Communication


Like it or not, we are experiencing a weakening in a preference for abstract and linear
thinking in favor of more image-based, concrete, holistic, non-linear thinking.
Regardless of what is being depicted in a photograph, the form of the medium itself
evokes in us a particular pattern that is exactly the opposite of the printed word.
As image-based communication becomes the dominant symbol system, it changes our default processing patterns, our preferred thoughts, and our interpretation of what's important. Even politicians know this. We saw this beginning with Kennedy's youthful dominance over an aging Nixon during their presidential debates in the early 1960s, and again with Barack Obama's appearance and visual communication skills over John McCain in 2008.

Information Age
"Digital Age" redirects here. For the four-piece American band, see The Digital Age.
History of technology
The Information Age (also known as the Computer Age, Digital Age, or New Media Age)
is a period in human history characterized by the shift from traditional industry that the
Industrial Revolution brought through industrialization, to an economy based on
information computerization. The onset of the Information Age is associated with the
Digital Revolution, just as the Industrial Revolution marked the onset of the Industrial
Age.[1][2]
During the Information Age, the digital industry creates a knowledge-based society surrounded by a high-tech global economy that influences how the manufacturing and service sectors operate efficiently and conveniently. In a commercialized society, the information industry allows individuals to explore their personalized needs, simplifying the procedure of making decisions for transactions and significantly lowering costs for both producers and buyers. This is accepted overwhelmingly by participants throughout all economic activities for efficacy purposes, and new economic incentives, such as the knowledge economy, are then indigenously encouraged.[3]
The Information Age was formed by capitalizing on computer microminiaturization advances.
[4] This evolution of technology in daily life and social organization has led to the fact
that the modernization of information and communication processes has become the
driving force of social evolution.[2]
The Internet was conceived as a fail-proof network that could connect computers
together and be resistant to any single point of failure. It is said that the Internet cannot
be totally destroyed in one event, and if large areas are disabled, the information is
easily rerouted. It was created mainly by DARPA, building on work carried out by British scientists such as Donald Davies; its initial software applications were e-mail and computer file transfer.
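To make the rerouting idea concrete, here is a minimal, purely illustrative sketch in Python: a toy graph model, not how ARPANET or actual Internet routing protocols work, with invented node names and topology. When one node fails, a breadth-first search simply finds another path.

```python
# Toy model of fault-tolerant routing: the network is a graph, and if a node
# fails, a breadth-first search still finds an alternate path when one exists.
from collections import deque

def find_path(links, start, goal, failed=frozenset()):
    """Return a path from start to goal that avoids failed nodes, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical four-node network with redundant routes between A and D.
links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(find_path(links, "A", "D"))                # e.g. ['A', 'B', 'D']
print(find_path(links, "A", "D", failed={"B"}))  # rerouted: ['A', 'C', 'D']
```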
Though the Internet itself has existed since 1969, it was with the invention of the World
Wide Web in 1989 by British scientist Tim Berners-Lee and its introduction in 1991 that
the Internet became an easily accessible network. The Internet is now a global platform

for accelerating the flow of information and is pushing many, if not most, older forms of
media into obsolescence.
Progression
Library expansion
Library expansion was calculated in 1945 by Fremont Rider to double in capacity every
16 years, if sufficient space were made available.[5] He advocated replacing bulky,
decaying printed works with miniaturized microform analog photographs, which could be
duplicated on-demand for library patrons or other institutions. He did not foresee the
digital technology that would follow decades later to replace analog microform with
digital imaging, storage, and transmission media. Automated, potentially lossless digital
technologies allowed vast increases in the rapidity of information growth. Moore's law,
which was formulated around 1965, calculated that the number of transistors in a dense
integrated circuit doubles approximately every two years.[6]
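Both estimates above are ordinary exponential doubling, so they are easy to check; a minimal sketch, where the 16-year and 2-year doubling periods come from the text and the starting values are illustrative assumptions:

```python
# Exponential doubling: quantity after `years` given a fixed doubling period.
def grow(start, years, doubling_period):
    return start * 2 ** (years / doubling_period)

# Rider's estimate: a library doubling every 16 years is 16x larger after 64 years.
print(grow(1.0, 64, 16))          # 16.0

# Moore's law: transistor counts doubling roughly every two years.
# Starting from ~2,300 transistors (Intel 4004, 1971) gives tens of billions by 2021.
print(round(grow(2_300, 50, 2)))  # ~77,000,000,000
```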
The proliferation of the smaller and less expensive personal computers and
improvements in computing power by the early 1980s resulted in a sudden access to
and ability to share and store information for increasing numbers of workers.
Connectivity between computers within companies led to the ability of workers at
different levels to access greater amounts of information.
Information storage
The world's technological capacity to store information grew from 2.6 (optimally
compressed) exabytes in 1986 to 15.8 in 1993, over 54.5 in 2000, and to 295 (optimally
compressed) exabytes in 2007. This is the informational equivalent to less than one
730-MB CD-ROM per person in 1986 (539 MB per person), roughly 4 CD-ROMs per person in 1993, 12 CD-ROMs per person in the year 2000, and almost 61 CD-ROMs per
person in 2007.[7] It is estimated that the world's capacity to store information has
reached 5 zettabytes in 2014.[8] This is the informational equivalent of 4,500 stacks of
printed books from the earth to the sun.
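The CD-ROM equivalences are simple per-capita arithmetic; here is a quick check of the 2007 figure, assuming a world population of roughly 6.6 billion for that year (the population number is my assumption, not stated in the text):

```python
# Check: 295 optimally compressed exabytes in 2007, expressed as 730-MB CD-ROMs per person.
total_bytes = 295e18   # 295 exabytes (figure quoted above)
population = 6.6e9     # approximate 2007 world population (assumption)
cd_bytes = 730e6       # one 730-MB CD-ROM

per_person_bytes = total_bytes / population
print(per_person_bytes / cd_bytes)  # ~61 CD-ROMs per person, matching the text
```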
Information transmission
The world's technological capacity to receive information through one-way broadcast
networks was 432 exabytes of (optimally compressed) information in 1986, 715
(optimally compressed) exabytes in 1993, 1.2 (optimally compressed) zettabytes in
2000, and 1.9 zettabytes in 2007 (this is the information equivalent of 174 newspapers
per person per day).[7] The world's effective capacity to exchange information through
two-way telecommunication networks was 281 petabytes of (optimally compressed)
information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, and 65 (optimally compressed) exabytes in 2007 (this is the information equivalent of 6 newspapers per person per day).[7] In the 1990s, the spread of the
Internet caused a sudden leap in access to and ability to share information in
businesses and homes globally. Technology was developing so quickly that a computer
costing $3000 in 1997 would cost $2000 two years later and $1000 the following year.
Computation
The world's technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 x 10^8 MIPS in 1986, to 4.4 x 10^9 MIPS in 1993, to 2.9 x 10^11 MIPS in 2000, and to 6.4 x 10^12 MIPS in 2007.[7] An article in the journal
Trends in Ecology and Evolution reports that by now digital technology "has vastly
exceeded the cognitive capacity of any single human being and has done so a decade
earlier than predicted. In terms of capacity, there are two measures of importance: the
number of operations a system can perform and the amount of information that can be
stored. The number of synaptic operations per second in a human brain has been
estimated to lie between 10^15 and 10^17. While this number is impressive, even in
2007 humanity's general-purpose computers were capable of performing well over
10^18 instructions per second. Estimates suggest that the storage capacity of an
individual human brain is about 10^12 bytes. On a per capita basis, this is matched by
current digital storage (5x10^21 bytes per 7.2x10^9 people)".[8]
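The per-capita comparison in the quotation is again straightforward arithmetic; a small sketch using only the figures quoted above:

```python
# Per-capita digital storage compared with the estimated storage of one human brain.
digital_storage = 5e21   # bytes of worldwide digital storage (figure quoted above)
population = 7.2e9       # people (figure quoted above)
brain_storage = 1e12     # estimated bytes per human brain (figure quoted above)

per_capita = digital_storage / population
print(per_capita)                  # ~6.9e11 bytes per person
print(per_capita / brain_storage)  # ~0.69, i.e. roughly one brain's worth of storage each
```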
Relation to economics
Eventually, Information and Communication Technology (computers, computerized machinery, fiber optics, communication satellites, the Internet, and other ICT tools) became a significant part of the economy. Microcomputers were developed and many businesses and industries were greatly changed by ICT.
Nicholas Negroponte captured the essence of these changes in his 1995 book, Being
Digital.[9] His book discusses similarities and differences between products made of
atoms and products made of bits. In essence, a copy of a product made of bits can be
made cheaply and quickly, and shipped across the country or internationally quickly and
at very low cost.
Impact on jobs and income distribution

The Information Age has affected the workforce in several ways. It has created a
situation in which workers who perform tasks which are easily automated are being
forced to find work which involves tasks that are not easily automated.[10] Workers are
also being forced to compete in a global job market. Lastly, workers are being replaced
by computers that can do their jobs faster and more effectively. This poses problems for
workers in industrial societies, and these problems have yet to be solved. However, solutions that involve lowering working time are usually highly resisted.
Jobs traditionally associated with the middle class (assembly line workers, data
processors, foremen and supervisors) are beginning to disappear, either through
outsourcing or automation. Individuals who lose their jobs must either move up, joining
a group of "mind workers" (engineers, doctors, attorneys, teachers, scientists,
professors, executives, journalists, consultants), or settle for low-skill, low-wage service
jobs.
The "mind workers" are able to compete successfully in the world market and receive
high wages. Conversely, production workers and service workers in industrialized
nations are unable to compete with workers in developing countries and either lose their
jobs through outsourcing or are forced to accept wage cuts.[11] In addition, the internet
makes it possible for workers in developing countries to provide in-person services and
compete directly with their counterparts in other nations.
This has had several major consequences, including increased opportunity in
developing countries and the globalization of the workforce.
Workers in developing countries have a competitive advantage which translates into
increased opportunities and higher wages.[12] The full impact on the workforce in
developing countries is complex and has downsides. (see discussion in section on
Globalization).
In the past, the economic fate of workers was tied to the fate of national economies. For
example, workers in the United States were once well paid in comparison to the workers
in other countries. With the advent of the Information Age and improvements in
communication, this is no longer the case. Because workers are forced to compete in a
global job market, wages are less dependent on the success or failure of individual
economies.[11]
Automation, productivity, and job loss
The Information Age has affected the workforce in that automation and computerization
have resulted in higher productivity coupled with net job loss. In the United States for
example, from January 1972 to August 2010, the number of people employed in

manufacturing jobs fell from 17,500,000 to 11,500,000 while manufacturing value rose
270%.[13]
Although it initially appeared that job loss in the industrial sector might be partially offset
by the rapid growth of jobs in the IT sector, the recession of March 2001 foreshadowed
a sharp drop in the number of jobs in the IT sector. This pattern of decrease in jobs
continued until 2003.[14]
Data has shown that overall, technology creates more jobs than it destroys even in the
short run.[15]
Rise of information-intensive industry
Industry is becoming more information-intensive and less labor- and capital-intensive
(see Information industry). This trend has important implications for the workforce;
workers are becoming increasingly productive as the value of their labor decreases.
However, there are also important implications for capitalism itself; not only is the value
of labor decreased, the value of capital is also diminished. In the classical model,
investments in human capital and financial capital are important predictors of the
performance of a new venture.[16] However, as demonstrated by Mark Zuckerberg and
Facebook, it now seems possible for a group of relatively inexperienced people with
limited capital to succeed on a large scale.[17]
Innovations
The Information Age was enabled by technology developed in the Digital Revolution,
which was itself enabled by building on the developments in the Technological
Revolution.
Computers
Main article: History of computers
Before the advent of electronics, mechanical computers, like the Analytical Engine in 1837, were designed to provide routine mathematical calculation and simple decision-making capabilities. Military needs during World War II drove development of the first electronic computers, based on vacuum tubes, including the Z3, the Atanasoff-Berry Computer, the Colossus computer, and ENIAC.
The invention of the transistor in 1947 enabled the era of mainframe computers (1950s-1970s), typified by the IBM 360. These large room-sized computers provided data
calculation and manipulation that was much faster than humanly possible, but were
expensive to buy and maintain, so were initially limited to a few scientific institutions,
large corporations and government agencies. As transistor technology rapidly improved,

the ratio of computing power to size increased dramatically, giving direct access to
computers to ever smaller groups of people.
Along with electronic arcade machines and home video game consoles in the 1980s,
the development of personal computers like the Commodore PET and Apple II (both
in 1977) gave individuals access to the computer. But data sharing between individual
computers was either non-existent or largely manual, at first using punched cards and
magnetic tape, and later floppy disks.
Data
Main article: History of telecommunications
The first developments for storing data were initially based on photographs, starting with
microphotography in 1851 and then microform in the 1920s, with the ability to store
documents on film, making them much more compact. In the 1970s, electronic paper allowed digital information to appear as paper documents.
Early information theory and Hamming codes were developed about 1950, but awaited
technical innovations in data transmission and storage to be put to full use. While cables
transmitting digital data connected computer terminals and peripherals to mainframes
were common, and special message-sharing systems leading to email were first
developed in the 1960s, independent computer-to-computer networking began with
ARPANET in 1969. This expanded to become the Internet (coined in 1974), and then
the World Wide Web in 1989.
Public digital data transmission first utilized existing phone lines using dial-up, starting in
the 1950s, and this was the mainstay of the Internet until broadband in the 2000s. The
introduction of wireless networking in the 1990s combined with the proliferation of
communications satellites in the 2000s allowed for public digital transmission without
the need for cables. This technology led to digital television, GPS, and satellite radio
through the 1990s and 2000s.
Computers continued to become smaller and more powerful, to the point where they
could be carried. In the 1980s and 1990s, laptops first allowed computers to become
portable, and PDAs allowed use while standing or walking. Pagers, which had existed since the 1950s, were largely replaced by mobile phones beginning in the 1990s, giving mobile
networking ability to a few at first. These have now become commonplace, and include
digital cameras. Starting in the late 1990s, tablets and then smartphones combined and
extended these abilities of computing, mobility, and information sharing.

Art In The Age of Electronic Media Timeline


1951: Invention of Video Recording
In 1951, the first video tape recorder (VTR) captured live images from television
cameras by converting the information into electrical impulses and saving the
information onto magnetic tape
In the early days, film was the only medium available for recording television
programmes
Charles Ginsburg led the research team at Ampex Corporation in developing the first
practical videotape recorder (VTR)
1960: First Presidential Televised Debate
Democratic Senator John F. Kennedy and Republican Vice President Richard M. Nixon
face each other in a nationally televised presidential campaign debate
A new era in which crafting a public image and taking advantage of media exposure
became essential ingredients of a successful political campaign
The use of TV as a source for accessing information in a faster-paced way than reading the newspaper
1973: First Cell Phone
Dr Martin Cooper, a former general manager for the systems division at Motorola, is
considered the inventor of the first portable handset and the first person to make a call
on a portable cell phone in April 1973
On April 3, 1973, standing on a street near the Manhattan Hilton, Mr. Cooper decided to
attempt a private call before going to a press conference upstairs in the hotel
A new form of easy communication in the technology world - communication is key to
the art world too
Without it, the artist couldn't get hold of gigs, collaborations, events, etc.
1978: Brian Eno
An English musician, composer, record producer, singer, and visual artist, known as one
of the principal innovators of ambient music
First ambient record, Ambient 1: Music for Airports
He was interested in making an album that could be played continuously at an airport to build a kind of surrounding and mood that would bring together all of the different
feelings of being at an airport and turn it into something soothing
1970: Bruce Nauman
One of the most innovative and provocative of America's contemporary artists
Used closed-circuit cameras in his installations with himself as the subject, specifically during a series called Performance Corridor

The pieces are characterized by repetitive actions that continue for multiple hours
"Walking in an Exaggerated Manner Around the Perimeter of a Square" - a silent piece of Nauman moving around a square
2007 & 2008: "Terra Nova: Sinfonia Antarctica"
Born Paul Miller, DJ Spooky is a Washington, DC-born electronic and experimental hip hop musician whose work is often described by critics and fans as "illbient" or "trip hop"
A large-scale multimedia performance work that serves as an acoustic portrait of a rapidly changing continent
Miller's field recordings from a portable studio, set up to capture the acoustic qualities of Antarctic ice forms, reflect a changing and even vanishing environment under duress
1982: "O Superman"
Composed by Laurie Anderson, an American experimental performance artist,
composer and musician who plays violin and keyboards and sings in a variety of
experimental music and art rock styles
This song addresses issues of technology and communication, in particular planes and
arms
Anderson based this song on an aria from the 1885 opera "Le Cid" by Jules Massenet.
She got the idea after listening to a recording of the aria made by Charles Holland, an African-American tenor
1991: Troorkh
A composition for trombone and orchestra by Iannis Xenakis
Xenakis treats the orchestra in a dense and opaque way, turning it into a single instrument rather than treating it as a combination of instruments
The composition starts with a tritone played by the winds. Then, the strings join in
playing clusters of music which are further developed afterwards
1992: Improvement to the Internet
An email connection was opened up in July 1992, and full Internet service followed in November 1992
Microsoft's full scale entry into the browser, server, and Internet Service Provider market
completed the major shift over to a commercially based Internet
During this period of enormous growth, businesses entering the Internet arena
scrambled to find economic models that work. Free services supported by advertising
shifted some of the direct costs away from the consumer--temporarily
2006: "Deep Wounds"
A large-scale, interactive installation that uses the campus's historic Memorial Hall to
explore unfinished healing and reconciliation
Deep Wounds projects a luminous light on the white-marble floor of the hall. Looking closely, visitors see hints of inscribed text
Created by Brian Knep, a media artist whose works range from large-scale interactive installations to microscopic sculptures for nematodes


http://inventors.about.com/library/inventors/blvideo.htm
http://www.history.com/this-day-in-history/kennedy-and-nixon-square-off-in-a-televisedpresidential-debate
http://www.history.com/topics/us-presidents/kennedy-nixon-debates
http://www.engology.com/eng5cooper.htm
1952: 4' 33''
John Cage (the pioneer of electro-acoustic music), performs 4' 33''
4'33" was composed without any notes or instruments being played
Made up of the natural environment that his audience was a part of
Example of introducing the world of sound art
http://www.moma.org/visit/calendar/exhibitions/1421
1960s: The Fluxus Movement
Fluxus was a loosely organized group of artists that spanned the globe, but had an especially strong presence in New York City. George Maciunas is historically considered the primary founder and organizer of the movement
The persistent goal of most Fluxus artists was to destroy any boundary between art and life
Fluxus art involved the viewer, relying on the element of chance to shape the ultimate outcome of the piece
http://www.theartstory.org/movement-fluxus.htm
1963: Nam June Paik
Founder of video art - introduced the world of video art into the art realm
Participation TV, where participants can manipulate the broadcast image through two microphones, or the sound of their voice. This presentation emphasized the live and instantaneous features of video, making it an excellent tool for interactive installations
Early usage (1974) of the term "electronic super highway" in application to
telecommunications
http://www.washingtonpost.com/blogs/going-out-guide/post/father-of-video-art-namjune-paik-gets-american-art-museum-exhibit-photos/2012/12/12/c16fa980-448b-11e28e70-e1993528222d_blog.html
http://www.pbs.org/art21/artists/bruce-nauman
http://www.songfacts.com/detail.php?id=11376
2005: YouTube First Launched
The world's most popular online video site, with users watching 4 billion hours worth of
video each month, and uploading 72 hours worth of video every minute
Founders are Chad Hurley, Steve Chen, and Jawed Karim (former PayPal employees)

It has played an instrumental role in everything from politics to media entertainment


Another source or tool to establish communications - Placing work into a "digital gallery
space"
2011: Music from a Dry Cleaner
Composed by sound designer and composer Diego Stocco
Creates eclectic compositions using custom built instruments, elements of nature and
experimental recording techniques
He used a puff iron, press and dry cleaning machines, a washer, clothes hangers, and a
bucket full of soap. The bass and lead sounds were created from the buzzing tones
coming from the conduits and engines

The Dawn of an Electronic Era


The computer age began when ENIAC (Electronic Numerical Integrator and Calculator) was completed in 1945. The first multipurpose computer, ENIAC set speed records with an amazing 5,000 additions per second. Computers have come a long way since: a laptop today can do 500,000,000 additions per second.
That's not the only difference. ENIAC weighed more than 30 tons, filled an 1,800-square-foot room
and included 6,000 manual switches. It used so much electricity that it sometimes caused power
shortages in its home city of Philadelphia. By contrast, a notebook PC today might weigh in at about
3 pounds.
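The speed comparison above reduces to a single division; a quick check using the figures quoted in this section:

```python
# How many times faster is the quoted laptop than ENIAC at additions per second?
eniac_adds_per_second = 5_000
laptop_adds_per_second = 500_000_000
print(laptop_adds_per_second // eniac_adds_per_second)  # 100,000x faster
```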

Booting Up
You may know that booting your computer means starting it up. But did you know the word comes
from pulling yourself up by your bootstraps? That's an expression that means taking charge of yourself, which is what a computer seems to do when it starts up!

Bugging Out

The term "bug" has been used for problems in machinery since electricity was invented. But the first computer bug was actually a moth! In 1945, a computer being tested at Harvard University stalled when a moth got caught inside. The engineers taped the moth into their computer log with the note, "First actual case of bug being found."

Computer Timeline

1945
The computer age begins with the debut of ENIAC (Electronic Numerical Integrator and
Calculator). It is the first multipurpose computer.
1975
The MITS Altair, a PC-building kit, hits stores
Bill Gates and Paul Allen establish Microsoft.
1976
Steven Jobs and Stephen Wozniak start Apple Computer.
1977
Apple Computer introduces the Apple II computer.
1978
Floppy disks replace older data cassettes.
1981
IBM introduces a complete desktop PC
1983
TIME magazine names the computer its "Machine of the Year."
1984

The user-friendly Apple Macintosh goes on sale


1985
Microsoft launches Windows.
1992
The Apple PowerBook and IBM ThinkPad debut
1996
Palm releases the PalmPilot, a hand-held computer also called a personal digital assistant.
1997
The term "weblog" is coined. It's later shortened to "blog."
1998
Google opens its first office, in California.
1999
College student Shawn Fanning invents Napster, a computer application that allows users to
swap music over the Internet.
"E-commerce" becomes the new buzzword as Internet shopping rapidly spreads.
MySpace.com is launched.
2000
To the chagrin of the Internet population, deviant computer programmers begin designing
and circulating viruses with greater frequency. "Love Bug" and "Stages" are two examples of
self-replicating viruses that send themselves to people listed in a computer user's email
address book.
America Online buys Time Warner in a deal valued at roughly $165 billion. It's the biggest merger of all time.
2001
Wikipedia is created.
Apple introduces the iPod.

2003
Spam, unsolicited email, becomes a server-clogging menace. It accounts for about half of all
emails. In December, President Bush signs the Controlling the Assault of Non-Solicited
Pornography and Marketing Act of 2003 (CAN-SPAM Act), which is intended to help
individuals and businesses control the amount of unsolicited email they receive.
Apple Computer introduces Apple iTunes Music Store, which allows people to download
songs for 99 cents each.
2004
Mark Zuckerberg launches Thefacebook at Harvard. The site expands to other universities.
Google introduces Gmail.
2005
YouTube, a video-sharing website, goes live.
2006
There are more than 92 million websites online.
Twitter, a website for mini-blogging and social networking, debuts.
2007
Apple releases the iPhone in the United States. iPhone users can access social media sites
and apps through their phone. The smartphone becomes a portable, hand-held minicomputer.
2010
Apple introduces the iPad.
2011
Social networking websites such as Twitter and Facebook help activists organize an uprising
in Egypt. The trend of using social networking websites to organize protests and
demonstrations continues throughout 2011 in the Middle East and North Africa. Various governments attempt to shut down social media and internet access to crack down on protest movements throughout 2011, with varying degrees of success.

A new report out from the Knight Commission on Information Needs of Communities in
a Democracy makes the case for emphasis on media literacy in the digital age.
Entitled Digital and Media Literacy: A Plan of Action, the report by Renee Hobbs focuses
on media literacy in the U.S., but some of its points struck me as potentially applicable
in other parts of the world as well. Hobbs isolates several digital and media literacy skills
that are necessary to take part in civic life in an information-saturated society (all of
these are taken directly from her report):

- Make responsible choices and access information by locating and sharing materials and comprehending information and ideas
- Analyze messages in a variety of forms by identifying the author, purpose and point of view, and evaluating the quality and credibility of the content
- Create content in a variety of forms, making use of language, images, sound, and new digital tools and technologies
- Reflect on one's own conduct and communication behavior by applying social responsibility and ethical principles
- Take social action by working individually and collaboratively to share knowledge and solve problems in the family, workplace and community, and by participating as a member of a community.

According to Hobbs, these all constitute "core competencies of citizenship" in the digital
age. My (open) question is, do these also constitute core competencies of citizenship in
less developed countries where access to technology is not a given? Are there other,
equally crucial media literacy competencies in the developing world that are not
included here? With the exception perhaps of using "new digital tools and technologies,"
they do seem fairly applicable to me, but no doubt there are other particular
considerations that need to come into play in areas where certain types of technology
are plentiful (mobile phones) while others are not (computers with broadband). I do think
the phrase "make responsible choices" takes on a different meaning in the context of
authoritarian countries, for instance, so not all language easily translates across
borders. Still, it's an interesting starting point for discussion.
WHAT'S AFTER THE DIGITAL AGE?

I wouldn't be surprised to see the digital age followed by some kind of Second Dark Age. People who like to sing the praises of technology rarely take into account the strain we are putting on an already overburdened universe. We live in a world of finite resources. We only have so much oil, so many precious metals, so much space on this tiny planet. I can't predict when this crash will occur, but I know it's coming.
Digital age, information age... whatever you want to call it, we're in it; and it follows many ages that came before it: Stone Age, Ice Age, Bronze Age, Iron Age. Because nobody has any clue who is going to invent the next printing press or the next internet, our guesses and speculations are all we have to go off of. I'm not saying the inevitable end to this digital era is right around the corner, but it will, eventually, be put into the same category as factory workers and stone tools. So what's gonna make that happen? What are your thoughts?? My best guess is the Nano Age, which would be awesome!!
I think we've only just now come through to the beginning of the beginning of the digital age. By this I mean that the past 50 years or more comprise part of that beginning, and probably in the next few hundred years will come the end of that so-called beginning; in other words, a microscopic blip of evolutionary time. It's impossible to say what is going to occur because from our perspective things are accelerating so rapidly. Society is widely accepting of these changes because we perceive them to be making our lives easier, among other things.
One thing about all this seems certain, though, and that is that a Pandora's box of sorts seems to have been opened upon the Earth, now that evolution seems to be asserting itself through the sphere of human consciousness via the materialization of ever more powerful technology, at an exponentially faster rate than the archaic (and much lengthier) means of biological adaptation. Who knows whether this means humanity has bought a ticket for the ride or not; every species and being ever to exist has been a stronger or weaker link in the evolutionary chain since time and space came about.

Beyond the Information Age


WE LIVE IN the information age, which according to Wikipedia is "a period in human history characterized by the shift from industrial production to one based on information and computerization."
Nothing surprising there, except for the idea that this is "a period in human history," which tends to suggest it will come to an end at some point. The industrial revolution in the
the late nineteenth century ushered in the industrial age, and the digital revolution in the

mid twentieth century spurred the emergence of the information age. So it is not entirely
crazy to speculate about what might lie beyond the information age.
Of course, I am not arguing that information will become obsolete. Firms will always
need to harness information in effective ways, just as most of them still need industrial
techniques to make their products cheaply and efficiently. My point, instead, is that
information will become necessary but not sufficient for firms to be successful. All this
talk of big data, for example, feels like an attempt to strain a few more drops of juice
out of an already-squeezed orange, just as Six Sigma was a way of squeezing more
value out of the quality revolution. Both are valuable concepts, but their benefits are
incremental, not revolutionary.
So just as night follows day, the information age will eventually be superseded by
another age; and it behooves those with senior executive responsibility to develop a
point of view on what that age might look like.
So here is a specific question that helps us develop this point of view, one that was a
topic of debate at our annual Global Leadership Summit at London Business School,
focused this year on the rapid advance of technology and its impact on not only
business, but society, politics and the economy: What would a world with too much
information look like? And what problems would it create? I think there are at least four
answers:
1. Paralysis through Analysis. In a world of ubiquitous information, there is always more
out there. Information gathering is easy, and often quite enjoyable as well. My students
frequently complain that they need more information before coming to a view on a
difficult case-study decision. Many corporate decisions are delayed because of the need
for further analysis. Whether due to the complexity of the decision in front of them, or
because of the fear of not performing sufficient due diligence, the easy option facing any
executive is simply to request more information.
2. Easy access to data makes us intellectually lazy. Many firms have invested a lot of
money in big data and sophisticated data-crunching techniques. But a data-driven
approach to analysis has a couple of big flaws. First, the bigger the database, the easier
it is to find support for any hypothesis you choose to test. Second, big data makes us lazy: we allow rapid processing power to substitute for thinking and judgment. One
example: pharmaceutical companies fell in love with high throughput screening
techniques in the 1990s, as a way of testing out all possible molecular combinations to
match a target. It was a bust. Most have now moved back towards a more rational
model based around deep understanding, experience and intuition.
3. Impulsive and Flighty Consumers. Watch how your fellow commuters juggle their
smartphone, tablet and Kindle. Or marvel at your teenager doing his homework. With

multiple sources of stimulation available at our fingertips, the capacity to focus and
concentrate on a specific activity is falling. This has implications for how firms manage their internal processes, with much greater emphasis being placed on holding people's attention than before. It also has massive consequences for how firms manage their
consumer relationships, as the traditional sources of stickiness in those relationships
are being eroded.
4. A little learning is a dangerous thing. We are quick to access information that helps
us, but we often lack the ability to make sense of it, or to use it appropriately. Doctors
encounter this problem on a daily basis, as patients show up with (often incorrect) self-diagnoses. Senior executives second-guess their subordinates because their corporate
IT system gives them line-of-sight down to detailed plant-level data. We also see this at
a societal level: people believe they have the right to information that is in the public
interest (think Wikileaks), but they are rarely capable of interpreting and using it in a
sensible way. The broader point here is that the democratization of information creates
an imbalance between the top and bottom of society, and most firms are not good at
coping with this shift.
Consequences
So what are the consequences of a business world with too much information? At an
individual level, we face two contrasting risks. One is that we become obsessed with
getting to the bottom of a problem, and we keep on digging, desperate to find the truth
but taking forever to do so. The other risk is that we become overwhelmed with the
amount of information out there and we give up: we realise we cannot actually master
the issue at hand, and we end up falling back on a pre-existing belief.
For firms, there are three important consequences. First, they have to become masters of attention management, making sure that people are focused on the right set of issues, and not distracted by the dozens of equally interesting issues that could be discussed. A surplus of information, as Nobel Laureate Herbert Simon noted, creates a deficit of attention. That is the real scarce resource today.
Second, firms have to get the right balance between information and judgment in
making important decisions. As Jeff Bezos, founder and CEO of Amazon, observed, there are two types of decisions: "There are decisions that can be made by analysis. These are the best kind of decisions. They are fact-based decisions that overrule the hierarchy. Unfortunately there's this whole other set of decisions you can't boil down to a math problem." One of the hallmarks of Amazon's success, arguably, has been its capacity to make the big calls based on judgement and intuition.
Finally, the ubiquity of information means a careful balance is needed when it comes to
sharing. Keeping everything secret isn't going to work anymore, but pure transparency has its risks as well. Firms have to become smarter at figuring out what
information to share with their employees, and what consumer information to keep track
of for their own benefits.
For the last forty years, firms have built their competitive positions on harnessing
information and knowledge more effectively than others. But with information now
ubiquitous and increasingly shared across firms, these traditional sources of advantage
are simply table-stakes. The most successful companies in the future will be smart
about scanning for information and accessing the knowledge of their employees, but
they will favour action over analysis, and they will harness the intuition and gut-feeling of
their employees in combination with rational analysis.
Julian Birkinshaw is Professor and Chair of Strategy and Entrepreneurship at the
London Business School.
