Inaugural Lecture
Professor Joanna Climacus
Chair in Memetics
Program in the History of Consciousness
University of California, Santa Cruz
September 2007
I
Thirty years ago Roland Barthes delivered his inaugural address at the Collège de France after being named to a self-named "Chair in Literary Semiology." On that occasion, three years before his unfortunate early death, Barthes was careful to be non-doctrinaire about the relatively new method to which his appointment gave credence. Wishing—according to Jonathan Culler—to discriminate his own vision of semiology from that which was rapidly developing in the wake of his influence, he sought to avoid "the consecration of a discipline." "It is my hope," he insisted, "that semiology will replace no other inquiry here, but will, on the contrary, help all the rest, that its chair will be a kind of wheelchair, the wild card of contemporary knowledge" (my italics). Not surprisingly, Barthes would then declare his wish that "Perhaps we shall someday be able to return to the description of these semantic consciousnesses, attempt to link them to a history; perhaps we shall someday be able to create a semiology of the semiologists, a structural analysis of the structuralists. . . ."
Over a decade before, in "The Imagination of the Sign" (in Critical Essays), Barthes had likewise sought to imagine the future development of the study of signs, encouraging us to "presume an extension toward much wider types of imagination, which we may find mobilized in many other objects than the sign."
Today, as I accept the honor of this chair in memetics—as I ease myself down into my own wheelchair and play my first wild card—I like to think Roland Barthes might be pleased by the commitment of this new, homologous discipline, not just semiotics' office mate but its compatriot and fellow-traveller, to "wider types of imagination" and "many other objects than the sign."
The Collected Works of David Lavery 2
For memetics, the systematic and interdisciplinary study of the origin and dissemination (Nietzsche might have called it "the genealogy") of ideas, seeks to train on thought much the same kind of speculative instruments semiotics turned on signs, examining their biological, anthropological, and psychological genesis and historic origin, and tracking the cause and effect of their propagation, their communication, socialization, and enculturation, over time and space. A glance at our proposed, still tentative curriculum will give some sense of the uniqueness, some might say the strangeness, of our enterprise.
The last two decades, of course, have not been kind to the intellectual playfulness which Barthes hoped for and this newly recognized avenue for thought now champions. I need not remind you today of the sordid intellectual history of the 1990s. Why resurrect that ugly and (hopefully) aberrant episode in the life of the mind? Its causes—a reactionary outburst against post-modernist ambiguity and confusion and the virulent obfuscation of the seventies and eighties, the rise of a new, McCarthyistic logical positivism desirous of stamping out all non-scientific forms of thought, the final eradication of all humanistic psychology in the name of behaviorism, sociobiology's temporary usurpation of the social sciences, in short, the near total dedication of the decade's intellectual energies to B.F. Skinner's dream of "the eradication of mystery"—and its effects—the demise of homo ludens and the tyrannical reign of homo scientificus, the abandonment of imagination, the disappearance of wonder, the virtual elimination of historical perspective, the withering away of the humanities into a precious few outposts in renegade university communities, like this one, home to the nation's first and only program in the "history of consciousness"—are well known to all of you.
And I need not remind you that, despite Barthes' cautions, semiotics—now memetics' sister discipline—played a not entirely innocent role in the totalitarian project of the '90s. For semiotics, after all, always had "imperialist" aspirations. "One could . . . assign to semiology a vast field of inquiry," Jonathan Culler would note in a 1976 monograph on Saussure:

if everything which has meaning within a culture is a sign and therefore an object of semiological investigation, semiology would come to include most disciplines of the humanities and the social sciences. Any domain of human activity—be it music, architecture, cooking, etiquette, advertising, fashion, literature—could be approached in semiological terms.
When semiotics was touted as being self-reflective and self-aware in a way no other discipline could claim (semiotics, Julia Kristeva once insisted, is "an open form of research, a constant critique that turns back on itself and offers its own auto-critique"); when we heard talk of semiotics becoming the heart of the university curriculum, when the study of literature, art, philosophy, linguistics, anthropology were proclaimed to be, in fact, only branches of the master-discipline semiotics, it was easy to recognize the intellectual hubris and the colonial ambitions of this still less-than-a-century-old method of thought and analysis.
As early as the 1980s, the study of signs had, in fact, even exhibited a tendency to become—as Jean Alter and Dana Polan had detected at the time—a "terrorist semiotics": cliquish, scientistic in expression, neologistic in vocabulary, inaccessible to all but the initiated, and deeply obscure.
In the new intellectual climate of a new century, we can now grasp such developments as typical fin de siècle intellectual decadence: as the natural by-products of a great change in outlook and episteme, of Yeats' "rough beast [slouching] toward Bethlehem to be born."[1] The gradual emergence of memetic inquiry was a product of this new turn on the spiral. To sketch out that emergence is my sole task here today.
II
Memes, of course, were discovered and named in 1976 by the Gregor Mendel of memetics, the British ethologist and sociobiologist Richard Dawkins, who thought of them simply as units of "cultural transmission," of "imitation." With the advent of human culture, Dawkins argued in The Selfish Gene, a new kind of replicator was introduced into the processes of biological evolution. Since the "primeval soup" in which life began, genes have "propagated themselves in the gene pool by leaping from body to body via sperm or eggs," but now, in the new "soup" which man himself stirs—what Karl Popper designated as "World 3"—an extra-genetic factor has been at work inspiring evolutionary change, which in the hands of culture is incredibly more rapid than the chancy, hit or miss, utterly unscientific methods which that fledgling scientist "nature" undertakes.

[1] Edward Said foresaw this in Beginnings: Intention and Method, 384.
"Ideas cause ideas and help evolve new ideas," the great brain scientist Roger Sperry once observed, writing—albeit unconsciously—in praise of the power and generativity of memes. "They interact with each other and with other mental forces in the same brains, in neighboring brains, and, thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet, including the emergence of the living cell" (quoted in Hofstadter, Metamagical Themas).
As human beings, after all, we live in the midst of three worlds, realms of being clearly definable and yet inextricably dependent on one another. World 1 is the realm of physical, material things, both organic and inorganic, the world, that is, of matter and energy. World 2 is the "subjective" world of "minds," especially the capacious biological invention known as the human mind; it includes—according to Sir John Eccles—"not only our immediate perceptual experiences—visual, auditory, tactile, pain, hunger, anger, joy, fear, etc., but also our memories, imaginings, thoughts and planned actions." World 3, however, the world of "objective knowledge," is made up of creations which result from the interweaving of Worlds 1 and 2. In World 3 (in the words of Popper commentator Bryan Magee), we find "objective structures which are the products, not necessarily intentional, of minds or living creatures; but which, once produced, exist independently of them."

Such structures occur even in the animal world—ant nests, for example, or spider webs, or beaver dams—and often are of central importance to the behavior of their makers. But human beings have added to World 3 not only our own imitations of nests and dams, but more abstract—and yet in many respects more enduring—structures like language, law, ethics, philosophy, religion, and art. Records of these achievements may be stored in World 1 so that their existence can continue independently of any one person's claim upon them, or so they may survive during an era unfriendly to their existence, but their essential existence is in World 3.
Underlying these new structures, Dawkins claimed, are the evolutionary, "genetic" factors he called "memes" (from the Greek root for imitation—"mimesis"—but altered to resonate with "gene" and suggest as well "memory"). He gave examples: "tunes, ideas, catch-phrases, clothes, fashions, ways of making pots or building arches." And he suggested how we should understand their dissemination:

memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation. If a scientist hears, or reads about, a good idea, he passes it on to his colleagues and students. He mentions it in his articles and lectures. If the idea catches on, it can be said to propagate itself, spreading from brain to brain.
(I think here of Lewis Thomas' characterization—in "On Societies as Organisms"—of scientists scurrying about at a professional conference, exchanging information, as an assemblage of social insects, like ants or termites! and of Clifford Geertz' groundbreaking characterization of academic disciplines as "intellectual villages," inviting ethnographic study of their interaction and kinship.[2])
Books, World 1 objects, have, of course, been instrumental to the storage and dissemination, the incubation, fertilization, and cross-fertilization, of memes.
In the seventeenth century, when the Gutenberg Galaxy had only just been discovered and was not yet even charted, the magical power of books still amazed. A John Milton could observe (in Areopagitica) that "Books are not absolutely dead things, but do contain a potency of life in them to be as active as that soul was whose progeny they are; nay they do preserve as in a vial the purest efficacy and extraction of that living intellect that bred them."

[2] In his "The Way We Think Now: Toward an Ethnography of Modern Thought," Geertz writes:
And that strange mind Joseph Glanvill would note as well the peculiar nature of book-borne memes. "That men should speak after their tongues were ashes, or communicate with each other in differing hemisphears," Glanvill wrote, "before the invention of Letters could not but have been thought a fiction." But speak they do, and not just in their books. For memes may speak—speak loudly, and their authors have a say, even without an actual, direct verbal presence.
An Aristotle, a Plato, a Confucius, a Christ may be the unacknowledged ventriloquist behind both men in the street and historically significant "dummies." "There is no soulful drawing-room ballad, no cinema-plot, no day-dream novelette or genteel text on the wall of a cottage parlor," Owen Barfield observes, contemplating the history of the western concept of love, "through which, every time the hackneyed word is brought into play, the authentic spirit of Plato does not peep for a moment forlornly out upon us."
Or consider John Maynard Keynes' oft-quoted observation that "The political fanatic who is hearing voices in the air has distilled his frenzy from the work of some academic scribbler of a few years back." Memes are these "voices in the air."
For the great mind, as R. G. Collingwood notes in his consideration of history as a form of thought, "encapsulates" its ideas for easier transmission to and assimilation by the future, and thus memes, physically housed in libraries and in other World 1 institutions like museums, or passed around directly from one mind to another (as often happens in classrooms), may pass through the digestive tract of a particular World 2, and be metabolized into a body of ideas, and more broadly into a new World 3 weltanschauung, without the particular consuming mind ever knowing the origin, or the real nature, of the ideas it has "swallowed" and passed on in transmuted form. ("My thoughts are my whores," Diderot knew two centuries ago, presciently aware of the promiscuous nature of memes.) Memes thus eventually create what Allen Wheelis calls a "scheme of things."
The scheme of things [Wheelis writes] is a system of order. Beginning as our view of the world, it finally becomes our world. We live within the space defined by its coordinates. It is self-evidently true, is accepted so naturally and automatically that one is not aware of an act of acceptance having taken place. It comes with one's mother's milk, is chanted in school, proclaimed from the White House, insinuated by television, validated at Harvard. Like the air we breathe, the scheme of things disappears, becomes simply reality, the way things are. It is the lie necessary to life. The world as it exists beyond that scheme becomes vague, irrelevant, largely unperceived, finally nonexistent. . . .

What if this "scheme" is wrong, untrue? The intellectual hold of memes upon World Three is not easily broken. Though as arbitrary as any sign, memes are not easily falsifiable.[3]
Now Dawkins even suggested—through the words of a colleague, N. K. Humphrey—that we should think of memes as "living creatures, not just metaphorically [for, God forbid!, that would be too, too poetic, hence invalid] but technically." When a meme finds a fertile mental "culture" in a particular brain and starts to grow there (as, for example, the idea of "memes" has evidently done in western thought, though it took more than two decades to find a niche), it is "as if," Dawkins proposes (again using Humphrey as his mouthpiece), the originator of the memes in question has "parasitized" it—"the way that a virus may parasitize the genetic mechanism of a host cell." Admonishing us again not to take such a concept as "just a way of talking" (for science is not just idle talk, of course, and not just poetry, but more like God speaking for eternity!), Dawkins/Humphrey insisted that proliferation of a given meme (their example was belief in life after death) can be understood as its "actual," physical realization, "millions of times over, as a structure in the nervous systems of individual men the world over."

[3] In Saving the Appearances, Owen Barfield, contemplating the natural ability of an individual to correct its perceptions—to move naturally from "I thought I saw" to "I found it was"—questions what the result would be if a mistake is shared by "a whole tribe or population." What happens "if the 'mistake' is not a momentary but a permanent one? If it is passed down for centuries from generation to generation?" Barfield offers a tentative answer:
Nobel Prize Laureate Jacques Monod called this the "infectivity" of an idea. As Monod speculated in Chance and Necessity, the "performance value of an idea depends upon the change it brings to the behavior of the person or the group that adopts it. The human group upon which a given idea confers greater cohesiveness, greater ambition, and greater self-confidence thereby receives from it an added power to expand which will insure the promotion of the idea itself."
Thus, Monod concluded, an idea's "capacity to 'take,' the extent to which it can be 'put over,'" is not primarily a matter of truth and objectivity. Ideas "take" because of their "infectivity," and this infectivity, Monod suggested, "depends upon pre-existing structures in the mind, among the ideas already implanted by culture, but also undoubtedly upon certain innate structures which we are hard put to identify." One thing is certain, however: "the ideas having the highest invading potential are those that explain man by assigning him his place in an immanent destiny, in whose bosom his anxiety dissolves."
Like many of the scientific ideas of its time, however, Dawkins' conception of memes remained essentially reductionistic and became an inspiration to the terrorist reductionism of the 1990s. (The very thesis of The Selfish Gene, after all, had been that only genes are really real; that organisms are only the genes' conspiracy to create another gene.)
In its early, formative years, the idea found fertile soil in the minds of those seeking to develop artificial intelligence (the word "memetics" first appeared, for example, in a book by Douglas Hofstadter [1985], the profoundly creative, but entirely reductionistic, author of several key books in the late 70s and 80s on AI) and fellow sociobiologists determined to produce an even more reductionistic, more brain-physical, "epigenetic" model of mental functioning. But the seminal idea of memes eventually proved to be extremely suggestive for those more inclined to practice the "imaginal reduction" archetypal psychologist James Hillman had called for back in the 1970s.
No doubt, the epidemic outbreak of Descartes' Disease in the late 90s and the new memetics' central role in diagnosing its source in the mind/body-severing memes of Cartesian thought and stopping its further spread had much to do with the credibility of memetics as we now understand and practice it.
III
Descartes' Disease, as you no doubt recall, was a bizarre, at first undiagnosable malaise, afflicting a large percentage of the populations of the world's developed nations, in which the victim mysteriously lost his or her "proprioception"—that "secret sense, our sixth sense": that "continuous sensory flow from the movable parts of our body—muscles, tendons, joints—by which their position and tone and motion is continually monitored and adjusted, but in a way which is hidden from us because it is automatic and unconscious."[4]
Without proprioception, the victims of Descartes' Disease found themselves to be disembodied, often unable to walk and sometimes even to stand, or hold objects in their hands, or to coordinate their body movements at all. Their hands began to wander about of their own accord unless tightly, consciously governed.
"This 'proprioception,'" an early victim of the disease explained in Sacks' seminal 1985 case-history of the disease, "is like the eyes of the body, the way the body sees itself. And if it goes, as it's gone with me, it's like the body's blind. My body can't 'see' itself if it's lost its eyes, right? So I have to watch it—be its eyes. Right?" "Something awful's happened," Christina explained in what Sacks described as "a ghostly flat voice." "I can't feel my body. I feel weird—disembodied."
Gradually, Christina, like many later victims of the disease, did become her body's eyes. With "every move made by artifice," she learned to function by keeping watch on her own substance with "almost painful conscientiousness and care." Though at first she seemed (as Sacks notes) "as floppy as a ragdoll," she managed to become a fairly successful, self-engineered automaton, her brain's body-image gradually gaining control where her own "proprioceptive body-model" had once normally functioned. But such adaptations, Sacks observed, "made life possible—they did not make it normal."
Her recovery, Sacks comments, represented a success "in operating, but not in being." ("Imagine a people who could only think aloud. As there are people who can only read aloud," Ludwig Wittgenstein suggests in Philosophical Investigations. Those with Descartes' Disease—their image is ineradicably printed on our memory—could, it seems, only act "aloud," could only act consciously. Many could only perform even the simplest functions while watching themselves in a mirror.)
[4] I am quoting Oliver Sacks' explanation of proprioception in The Man Who Mistook His Wife for a Hat.
Ten years after the first reported cases, medical science was still at a loss in pinpointing a cause, let alone a cure, for the disease, which by then had afflicted thousands. The disease Christina fought so bravely in isolation thirty years ago (Sacks calls her "one of those unsung heroes, or heroines, of neurological affliction") came out of the closet. It was impossible to walk a city street without being overwhelmed by its terrifying presence. It was as if a new species of automatons had sprung up, almost overnight.
Almost a century ago, Wittgenstein would speculate incredulously:

But can't I imagine that the people around me are automata, lack consciousness, even though they behave in the same way as usual?—If I imagine it now—alone in my room—I see people with fixed looks (as in a trance) going about their business—the idea is perhaps a little uncanny. But just try to keep hold of this idea in the midst of your ordinary intercourse with others, in the street, say! Say to yourself, for example: "The children over there are mere automata; all their liveliness is mere automatism." And you will either find these words quite meaningless; or you will produce in yourself some kind of uncanny feeling, or something of the sort.
If he had lived into the nineties, Wittgenstein would not have needed to resort to skeptical imagining of such an impossible spectacle in the privacy of his rooms at Cambridge. For his "uncanny" thought experiment had incredibly come true "in the street," before our very eyes.
The symptoms, so bizarre, so mind-boggling—or should I say "body-boggling"?—which Sacks sought to describe very carefully, lest he strain the reader's credulity, and the disease which in the mid-eighties must have sounded like science-fiction, became as common and as identifiable as the flu.
For a decade, it was as if the species itself had lost, or was about to lose, its proprioception. It was as if mind and body, always an uneasy alliance in our kind, were—in some decisive evolutionary step—about to part company for good. The disease hit us where we lived, attacked our weakest metaphysical link, our most vulnerable seam: our mind/body dualism—at once the surest legacy of our evolutionary past and perhaps our very essence as beings-in-the-world.
"Man is to himself the greatest prodigy in nature," wrote Pascal in the 17th century, "for he cannot conceive what body is, and still less what mind is, and least of all how a body can be joined to a mind. This is his supreme difficulty, and yet it is his very being. The way in which minds are attached to bodies is beyond man's understanding, and yet this is what man is." We began to come apart at the seams.
At first, her doctors were ready to blame Christina's malaise on that old bug-a-boo hysteria; then, after running tests, they sought a physiological source: an acute "polyneuritis . . . affecting the sensory roots of spinal and cranial nerves throughout the neuraxis." Sacks, more philosophically inclined than his colleagues (he began his essay by discussing none other than Wittgenstein on the question of doubting the reality of our bodies), was not so content with reductionistic explanations.
He spoke of the need to "think phenomenologically" about Christina's disease, of understanding her state as "a genuine phenomenon, in which her state-of-body and state-of-mind are not fictions, but a psycho-physical whole." But neither Sacks nor subsequent researchers pursued this line of thought until the late 1990s.
Not until victims underwent therapy with medical semioticians were they able to learn to "operate" with something more than mechanical precision.
It was not until my own demonstration—the outgrowth of my work on a more humanistic approach to the understanding of memes—that the search for a cause in neurology or physiology or even genetics was understood at last to be too easy, too reductionistic and simple-minded, and that researchers like myself began to seek instead a hidden "philosophical" source for this psychosomatic affliction, a "causative-agent" to be found not just in our soma but in our history of ideas, our "scheme of things"—and found it in the "memes" of its discoverer and founder: the thinker who found it methodologically expedient to "suppose not that God, who is most good and the fountain of truth, but rather that some evil genius, at once more powerful and cunning, has bent all his efforts to deceive me," who found it possible to "suppose heaven, air, earth, colors, shapes, sounds and everything external are nothing but the delusions of dreams that he has contrived to lure me into belief. I will consider myself not to have hands, eyes, flesh, blood, or any sense, but as falsely thinking myself to have all these things."
As a result of my own work, the disease was named, in customary fashion, after its discoverer—or should I say founder? The true "evil genius" was given credit for his creation.
Aware, finally, of Cartesianism as the releaser for this disease, the World Health Organization undertook immediate and successful hygienic educational measures to prevent the further spread of the disease, though little could be done for those whose minds had already been too deeply infected by the hegemony of Descartes' memes. This episode in the history of medical perception stands as a watershed in the reign of modern reductionism.
IV
Over a century ago Nietzsche had predicted that, "spurred by its powerful illusion," and antithetically piloted by an epistemologically innocent method and the grandiose ambition of mastering the infinite, science would inevitably "suffer shipwreck." There will come a time, he prophesied in The Birth of Tragedy, when "logic coils up at these boundaries and bites its own tail" and "suddenly [a] new form of insight breaks through. . . ."
In 1969 Julia Kristeva could characterize semiotics as this new form: as "the place where the sciences die." Semiotics, she wrote, is

both the knowledge of this death and the revival, with this knowledge, of the "scientific"; less (or more) than a science, it marks instead the aggressivity and disillusionment that takes place within scientific discourse itself.
Such disillusionment, we now know, produced, however, not a willing submission to self-reflection but a knee-jerk, quite unscientific revolt against any mode of thought not certifiably scientific. But semiotics' own momentary contribution to such positivism does not gainsay the importance of Kristeva's understanding of its epistemological role. Twentieth-century semiotics, it seems to us now, lacked only an evolutionary dimension—one that already existed at the time Kristeva wrote.
In René Thom's evolutionary topology, semiotics was understood as (in Robert Innis' summation) "the effort of reality itself to double back upon itself and thematize itself, revealing in the process the isomorphism between 'real processes' and semiosis." But if this was and still is true of semiosis, it can even more truly be said of memiosis. Nietzsche's new form of insight, ouroboric in theory and practice, I want to suggest, is memetics.
Charles Morris had suggested in 1938 that semiotics' great significance might lie in its potential for unifying modern thought. In a 1982 study of the place of semiotics in the life of the mind, Dean and Juliet MacCannell had similarly suggested that it could well prove to be the most fruitful means for forging a new consensus between the hard and social sciences and the humanities. It would now seem, however, that only a semiotics working in tandem with memetics can hope to achieve such an integration, becoming truly "ecumenical" (in Thomas Sebeok's sense): dedicated, that is, to the belief that all inquiry into meaning must assume its production to be a "normal occurrence in nature."
And so today we celebrate a new program of study dedicated not to the "eradication of mystery" but to the interdisciplinary exploration of what we might call, after G. Spencer-Brown, the "original mystery":

we cannot escape the fact [wrote the British mathematician in his Laws of Form] that the world we know is constructed in order (and thus in such a way as to be able) to see itself. This is indeed amazing. Not so much in view of what it sees, although this may appear fantastic enough, but in respect of the fact that it can see at all. But in order to do so, evidently it must first cut itself up into at least one state which sees, and at least one other state which is seen. In this severed and mutilated condition, whatever it sees is only partially itself. We may take it that the world undoubtedly is itself (i.e. is indistinct from itself), but, in any attempt to see itself as an object, it must, equally undoubtedly, act so as to make itself distinct from, and therefore false to, itself. In this condition it will always partially elude itself. It seems hard to find an acceptable answer to the question of how or why the world conceives a desire, and discovers an ability, to see itself, and appears to suffer the process. That it does so is sometimes called the original mystery.
Out of this mystery—because of it—are born all semes and all memes and indeed all the means by which we try to fathom them—all the sciences and all the humanities. And back into the mystery, Spencer-Brown reminds us, go all the results, for in seeking to plumb the abyss we only deepen it:
Perhaps, in view of the form in which we presently take ourselves to exist, the mystery arises from our insistence on framing a question where there is, in reality, nothing to question. However it may appear, if such desire, ability, and sufferance be granted, the state or condition that arises as an outcome is . . . unavoidable. In this respect, at least, there is no mystery. We, as universal representatives, can record universal law far enough to say and so on, and so on you will eventually construct the universe, in every detail and potentiality, as you know it now; but then, again, what you will construct will not be all, for by the time you will have reached what now is, the universe will have expanded into a new order to contain what will then be. In this sense, in respect of its own information, the universe must expand to escape the telescopes through which we, who are it, are trying to capture it, which is us.
The snake eats itself, the dog chases its tail.
Will the human mind ever again mistake its own manifestations for reality, ever again believe in Truths independent of its own participation in their creation, ever conceive of facts not dependent on fictions? "The greatest sorcerer," Borges noted (citing Novalis), "would be the one who bewitched himself to the point of taking his own phantasmagorias for autonomous apparitions." And of course we are this sorcerer, as Borges spent a lifetime reminding us.
We (the undivided divinity that operates within us) have dreamed the world. We have dreamed it strong, mysterious, visible, ubiquitous in space and secure in time; but we have allowed tenuous, eternal interstices of injustice in its structure so we may know that it is false.

No doubt we will mesmerize ourselves again. But now we have constructed a disciplined means for waking ourselves from hypnosis.
Out of its positivistic origins, memetics has come to thrive in the cracks of the structure of human sorcery where, along with semiotics, it stands to become the tail-chasing metameans of imagining our imagination, of prolonging the original mystery.