
THINKING DIFFERENTLY ABOUT DOING AI:
Toward a satisfactory epistemic methodology of the strong hypothesis

I. Rationality Is Not Enough: AI should simulate integral thought

Copyright 1999 Everett A. Johnston.

The discipline of artificial intelligence occupies a precarious position today. Although AI is the lofty ideal of computer science, the goal of simulating the full range of human intelligence recedes ever further toward the horizon. This is due in large part to AI theorists' lack of standardized knowledge about subtle aspects of human thinking which, collectively, have been called the cognitive faculties. This first of a series of three articles deals with what is required if AI theorists are to build from first principles upon the latest findings of neurobiology, which have immensely expanded the possibility of understanding the cognitive faculties.
The definitions of intelligence to which AI theorists usually resort are taken from cognitive science rather than medical neurobiology. Just as formal logic has been the vanguard of Western thought, so too was formal logic at first thought to be central to the budding discipline of cognitive science. Cognitive scientists have since reconsidered this notion, but even today, because most in the field adopt some form of functionalism, they think of cognitive faculties as fashioned after archetypes of rationale firmly entrenched in the formal logic of traditional rationalism.1 Modern philosophy as well as the latest findings of neurobiology challenge this convention, however. Formal logic is too narrow a basis upon which to explain either what neurobiologists have lately observed in their laboratories or what philosophy's most apperceptive visionaries have explicated concerning cognition over the last seventy years. Could a dogmatic adherence to methodologies dependent upon the tenets of formal logic be the root cause of AI not being as successful as it might be?
Aristotle's logic portrays reason as dispassionate, sentential, and combinative. But theoretically, at any rate, it should be possible to reconceive logic and expand it beyond Aristotelian ideals by modeling reasoning patterns after archetypes of cognition known as integral thoughts (so called because they combine reason with emotion). Integral thoughts are the mental by-product of rationality integrated with affective factors. They result from cognitive faculties acting in concert to achieve coherent meaning on many levels of the psyche at once. Multi-faceted and self-contained, they are intrinsically associative and, far from being sentential and combinative, nest within each other like so many Chinese boxes. There is much debate as to their significance, but since Ernst Cassirer's comprehensive treatment of concept formation in his Philosophy of Symbolic Forms (1957), no one can credibly maintain that most of the conclusions of which the mind is capable result from the application of formal logic alone. However, the state of the art of computing is such that even sophisticated programming languages manipulate data by algorithms based exclusively upon formal logic or its derivatives. Because the end result of AI involves a computer program, after all, the tools of the artificial intelligence community have shaped an AI fashioned more or less according to the Western paradigm of cognitive action, a tradition increasingly recognized to be antiquated. Of course, the business of AI theorists has always been not to reinvent thinking per se, but to
attempt the simulation of human intelligence as they currently understand it through computer programming.
Because they employ Western paradigms of cognitive action, though, system design has become a struggle to
stabilize simulations of intelligence which never seem to be fully reconciled to human thinking. Constrained
by pragmatic necessity to show steady progress, AI theorists have nevertheless managed, perhaps without
their realizing it, to develop AI systems in such a way as to partially compensate for this epistemic lack.
To do so, they have taken advantage of a notion about physical symbol systems first stated formally by
Newell and Simon. Any physical symbol system in and of itself has "the necessary and sufficient means for
general intelligent action." 2 Physical symbols are the least common denominator, the middle ground so to
speak, between two basic modes of thinking, each a recognizable manner of cognition in its own right: (1) the
mind's dynamic experience of subjective rationality, and (2) the objectification of this dynamic experience
through exercise of various means of representation. Both basic modes of thinking are important to normal
thinking processes, yet most AI theorists believe that the first, subjective rationality, falls outside the province
of their investigations. Consequently, to implement AI design, they unwittingly choose an axiomatization so as to favor hypotheses which smooth over the inconsistencies that must arise upon shifting emphasis from one basic mode of thinking to the other. The customs and rules of thumb which contribute to this
"smoothing-out" process have enjoyed some success. In fact, a few AI researchers openly state that they
actively seek to make provision for human intuition in their AI system design.3 But no one has been able to
put forth a plausible general schema that codifies the exact means to implement such kinds of AI system
design.
In search of such a general schema, what appears to be common to the mind's dynamic experience of subjective rationality and its objective representation is the physical symbols shared by both. On the one hand, although thinking as thought in the mind can be non-verbal, it is most often subvocal and actually plays off the physical symbols associated with speech by adopting formal linguistic structures for its own purposes. On the other hand, it is obvious that before any thought (verbal or otherwise) can be expressed, it must be rendered into some form of physical symbol system so that its meaning might be readily assimilated by an audience.
Taking both manners of cognizing reality into account, AI theorists postulate a Physical Symbol System Hypothesis to reconcile the inner world with the outer: any physical symbol system in and of itself has "the necessary and sufficient means for general intelligent action." As a corollary, a Principle Of Medium Independence ensues, which equates mere simulation of an automatic formalism system to the actual creation of one. The philosopher John Haugeland explains this principle in his Artificial Intelligence: The Very Idea:
"... if Moe [an AI system] is based on a GOFAI [Good Old Fashioned
Artificial Intelligence] theory of Curley [some human], then [Moe]
actually has a mind of his own and literally understands. ...
[B]rain cells and electronic circuits are manifestly different media;

but, maybe at some appropriate level of abstraction, they can be


media for equivalent formal systems."4
By subscribing to both the Physical Symbol System Hypothesis and the Principle Of Medium Independence, AI theorists establish a theoretical framework which presumes that little, if any, difference exists between a human mind and a computer running sufficiently sophisticated AI programs.
Yet people in general find it difficult to conceive that, as far as intellectual affairs go, the human mind is an
automaton. Are people fraught with hubris, then, that they should balk at the concept? Hardly. Sentience and
cognizance are not hypothetical to us; at the personal level, we all are steeped in them and well-experienced
with their intricacies. Self-consciousness is a feeling which can be referred to. Since the time of René Descartes, the father of modern philosophy, it is also a feeling which can be referred to by the act of reason itself. "I think, therefore I am," Descartes stated. Existence is reason's most immediate implication. Philosophers of the twentieth century, such as Jean-Paul Sartre, have elaborated upon this maxim and come to general agreement that existence for human beings means self-conscious existence. Hence, the modern meaning of Descartes' maxim is that self-conscious existence and reason are inextricable: "I think, therefore I am, self-aware." In other words, a human being is an ego as well as a thinking mind.
This is the flaw in the Physical Symbol System Hypothesis. In addition to offending our sensibilities, equating
intelligence to adroit symbol manipulation is an assumption which flies in the face of the last four hundred
years of philosophical inquiry. While it might be useful for many purposes to abridge the meaning of
intelligence by reducing it to formal logic, and thereby sanitize it of all reference to self, it is pointless to
thereafter deny this reduction and attempt a reconstitution of cogitative substance through the introduction of
a Principle Of Medium Independence. Treating intelligence in this way is no better than what over-eager industrialists did with bread. They introduced white bread, a loaf made from bleached flour stripped of its nutrients and then fortified with synthetic substitutes. Gradually, it became obvious to everybody that unbleached flour contains those nutrients already and need never have been bleached in the first place.
Yet AI theorists continue to "bleach" intelligence by presupposing a Physical Symbol System Hypothesis
which posits that no additional understanding of the chosen medium for cognitive functioning other than a
command of some physical symbol system is needed in order to achieve general intelligent action. Symbol
manipulation, on this view, is all that is required to produce an artificially intelligent system. To make up for the cogitative substance lost by dismissing the question of what part the nature of the medium plays in intelligence, they likewise summarily presuppose a Principle Of Medium Independence, which posits that the medium which articulates such an artificially intelligent system is not important to its general intelligent action.
But these assumptions beg the question. Western tradition provides a system of symbol manipulation (i.e. its
logic), but no thoroughgoing theory of intelligence; AI theorists are in the business of simulating human
intelligence, yet, loath to stray from the Western tradition, they assert that the West's system of symbol manipulation is itself natural intelligence. What is more, AI theorists ignore the fact that physical symbol systems are meaningless in and of themselves. Intelligibility is not an intrinsic property of
systematically organized and interconnected relationships; intelligibility involves the comprehender, not just
the comprehended. One must speak of context before one can legitimately speak of intelligibility. Semantic
content logically precedes syntax.5 Context is ultimately grounded in the comprehender's conceptual world,
which processes incoming data not only according to standards of categorical understanding but also
according to standards of sensibility.
For this reason, the human mind is not merely an inference engine; it is a contextualization engine. The human mind incorporates the relationships that it organizes, and makes use of their interconnectedness in such
a way that it thereby modifies its own character as a medium for intelligent action. An entire frame of
reference can be seen to shift in this way, sometimes in catharsis as with Eureka experiences of sudden
insight. (We see this experimentally in infants that develop in an environment rich with activity and personal
interaction. They tend to have accelerated neural development. Beyond that, neurobiologists have also
recently discovered that, when presented with sudden loss such as an amputation, already mature neural
tissues begin a steady process of reintegration. Areas of neural tissue which previously oversaw the now-absent limb become rededicated to helping oversee body parts administered by contiguous neural tissue.)6
Besides begging the question, AI theorists also overlook the fact that neither the Physical Symbol System Hypothesis nor the Principle Of Medium Independence adequately defines what is meant by "general intelligent action." The term is essential to both hypotheses, but it is only a relative phrase, after all. As with any relative concept, the phrase "general intelligent action" requires firsthand experience of pertinent absolutes if it is to actually become meaningful (as opposed to being merely formal). Otherwise, what fundament can the concept of general intelligent action have? None, if as a relative concept it is based upon other relative concepts that, in turn, are based upon yet other relative concepts ad infinitum. The notion of general intelligent action requires a fiducial standard to serve as its metric, against which to measure its extent and bearing. No formal definition of such an absolute has yet been attempted, perhaps because none has seemed necessary in order for human beings to descry the presence of general intelligent action about them in their environment. Intuitively, people understand that the measure of general intelligent action lies within their own mind.
People from all walks of life feel that they can readily determine the intelligent nature of discriminating
behaviors, whether exhibited by humans or animals. Likewise, action that results from habit and instinct seems equally obvious and straightforward, as does mere happenstance. Human beings are innately
conscious of their own mental processes and many find it easy to judge the relative rationality of behavior
they observe by interpolating from their own mental experience how the observed behavior might originate in
the consciousness of the living thing observed. Such determinations do not rely upon some theoretically
sophisticated axiomatic understanding of mind by which the degree or quality of the reasoning or intelligence
exhibited may be educed. Rather, human observers rely upon personal experience of their own mind to inform
them as to whether or not a particular behavior exhibits rational traits. Observers may not even be able to
discriminate which type of rationale came into play to produce the given behavior, yet they can feel confident
overall that, whatever the basis for such behavior, it exhibits rationality. A sense of the rationality of the
behavior obtains, however idiosyncratic that sense might be.
Semantically, this subjective sense of rationality is a long way from a set of standardized definitions of mental
functions that could help computer scientists further AI. The fundamental difficulty of programming a
computer to simulate natural intelligence stems from this inability to reduce what is subjective about
rationality to simple combinations of short-term heuristics and rudimentary algorithms. The twentieth-century philosopher Martin Heidegger might have contended that such intractability is due to our failing to understand that natural intelligence is always, at the least, a dynamic system. Intelligence is evidence of a thinking, he would have argued, and not merely of a mindset. As with life in general, the rules that govern actual thinking tend to vary as time and circumstances change. Hence, real-life consequences and the generations of outcomes which result must transcend the rules of the West's traditional logic as we know it. It can be no other way, Heidegger would have emphasized. There is a clear-cut dichotomy between the subjective experience of rationality and any objectification thereof, between "thinking" and "scientific representation,"7 and not just because of obvious differences in the medium which expresses rationality (i.e. whether via cognition or paper).
This is because the warp and woof of our inner reality depends upon many antecedents besides those which happen to obey the rules of formal logic, i.e. what has loosely been called the Aristotelian canon. When
Aristotelian antecedents exist, restricting further discussion to them and their consequents does have the
advantage of ensuring the regularity with which outcomes can be reproduced (i.e. consistency), the
comprehensiveness of supporting reasons (i.e. thoroughness), and the homogeneity of consequent possibilities
(i.e. cogency). But attempts to schematize real world situations thereby are most often only trivially
successful, because the resulting schematization lacks true correspondence to our apperception of reality. This
apperception of reality includes thinking, undoubtedly. Yet compared to the scope of anyone's experience, the understanding exhibited by the West's traditional logic is shallow indeed. Nonetheless, the premise
undergirding scientific investigation as it has come down to us through the centuries is to study things as they
are, with as little distortion due to the human condition as possible. This premise was carried to an extreme by
early twentieth century behavioral psychologists and linguists who adopted a doctrine that prohibited direct
consideration of mental processes, calling the mind a "black box." Paradoxically, exponents of the modern
science of intelligence known today as artificial intelligence seem not to be able to make much more headway
than did those behaviorists. In fact, progress in AI is routinely measured by the ingenuity with which its
theorists deftly avoid direct consideration of incipient mental processes which actually underlie natural
intelligence, even as they achieve AI programs which display some semblance of human reasoning.
Connectionist methods epitomize the trend.
This is nothing unusual, of course. People from all intellectual disciplines try their best to avoid considering
such incipient mental processes, because to do otherwise would mean that someone somewhere would have to
chart the mind in all its detailed intricacy. The West's traditional logic was devised expressly to obviate the
need for such detailed knowledge as to how minds actually think. Aristotle seems to have based logic upon a paradigm of reasoning designed expressly to avoid addressing the apparently intractable issue of subjectivity. There appear to have been good reasons for him to do so. Although he was not so naive as to think that
the mind is a "black box" and thus inscrutable, perhaps he did understand that to undertake exhaustive
charting of the mind's abilities at so early a stage of investigation would be folly. Therefore, all that was
sought were "universal rules" sure to hold for many situations. Such a reduction can at best only be
approximate, of course, because of the complexity of the mind. So, although today Western thought depends
upon adherence to formal logic systems (or extensions of logic based upon the concepts of formal logic), in
fact, the Aristotelian fundament upon which formal logic was built was only intended to represent the way
that the mind should think, not the way it actually thinks.
For this reason there is no way to draw a one-to-one correspondence between dynamic experiences of subjective rationality and the static infrastructure of traditional logic systems. However, rather than fault
traditional systems of logic, AI theorists have instead sought to program in ways which circumvent the
difficulty. They took advantage of object-oriented programming (oops) techniques because these were
designed to facilitate the creation of modular programs which pass program control on a (so to speak) "need
to know" basis, thereby avoiding the problem of simulating the mind's global scope of comprehension and its
contextualization abilities. AI theorists also began to incorporate into their programs clever generalizations of
traditional logic which relax its deterministic character, and which could be incorporated into already
developed algorithms without too much reworking. By so doing, AI theorists could program so as to treat a
greater number of situations in terms of already well-known classical problems. Their programs could be
made to derive answers which account for "grey" areas in the raw data in terms of
"fuzzy" equations, rather than in terms of responses to subtle nuance in the situation's character (as humans
tend to do).

In its favor, it is relatively easy to write oops code so that, regardless of whatever other processes of inference
that may be going on to accomplish a certain task, the program will approximate the sorting behavior of a
human being. It will automatically develop a classification system on the fly in the process of tallying raw
data it encounters, and can even be programmed to do so in a heuristic manner (i.e. without resort to a stock of
predefined categories included by the programmers). The classes the program develops exist in a hierarchy of
ancestors. Each level of the hierarchy is composed of classes of data which resemble one another, or are in
other ways related. These levels closely parallel the concept from rhetoric known as a "universe of discourse"
and have been employed to represent such diverse things as departments of retail goods, alternatives of game
strategy, and ranges of medical vital signs. Once established, a reliable hierarchy means that program focus need not be diffused over the data pool's entire gamut. Focus can be, in effect, temporarily restricted to
consideration of just certain universes of discourse (levels of the hierarchy of ancestors), just those needed
moment by moment to secure a particular inference at hand. By momentarily limiting a program's scope in
this manner, limited goals of deduction which contribute to an overall solution can be achieved without
extraneous considerations left over from other, nonpertinent universes of discourse getting in the way.
Disparity between the end result of the program's "reasoning" and that which issues from a human being's
common sense is diminished in this way. Common sense keeps human reasoning on point, not other-directed,
and the generation of a hierarchy of ancestor classes simulates this human propensity. But oops programming
techniques are innately limited. As the size and diversity of the data pool increases, the amount of forethought
involved becomes taxing on the programmer who must decide which criteria the program will use to build a
hierarchy of ancestors, or which criteria the program itself will use to decide which criteria it will use to build
a hierarchy of ancestors.
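
To make the technique concrete, the following minimal sketch in Python (every class name and the retail example are hypothetical illustrations, not any actual AI system's design) shows on-the-fly classification into a hierarchy of ancestors, with focus restricted to one universe of discourse at a time:

from collections import defaultdict

class UniverseOfDiscourse:
    """One level of the hierarchy: items grouped by a shared criterion."""
    def __init__(self, name, criterion):
        self.name = name
        self.criterion = criterion          # maps an item to a class label
        self.classes = defaultdict(list)    # classes arise as data arrives, not from stock categories

    def tally(self, item):
        self.classes[self.criterion(item)].append(item)

class AncestorHierarchy:
    """A stack of universes of discourse, tallied in parallel."""
    def __init__(self, levels):
        self.levels = levels

    def tally(self, item):
        for level in self.levels:
            level.tally(item)

    def focus(self, name):
        # Restrict attention to a single universe of discourse,
        # instead of diffusing it over the data pool's entire gamut.
        return next(level for level in self.levels if level.name == name)

# Hypothetical usage: retail goods, one of the article's example domains.
hierarchy = AncestorHierarchy([
    UniverseOfDiscourse("department", lambda g: g["dept"]),
    UniverseOfDiscourse("price band", lambda g: "budget" if g["price"] < 20 else "premium"),
])
for goods in [{"dept": "toys", "price": 12}, {"dept": "garden", "price": 45}]:
    hierarchy.tally(goods)
print(hierarchy.focus("price band").classes["budget"])   # [{'dept': 'toys', 'price': 12}]

The point of the sketch is the focus() step: once the hierarchy is built, an inference need only consult one level of ancestor classes, mirroring the "need to know" passing of program control described above.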
Another problem with oops is that it is not at all certain whether significant results computed using data from an individual universe of discourse guarantee similar significance at the macro level of the aggregate hierarchy (as research into nonmonotonic reasoning indicates).8 The technique of excluding
considerations taken from other levels of the hierarchy might actually serve to hopelessly bias results, rather
than merely simplify them. At best, this piecemeal technique of passing program control on a "need to know"
basis produces an imprecise conformal mapping onto human reasoning that is bound to be riddled with
many-to-one relationships and non sequiturs, products of the inability of formal logic to account for
psychologisms and paradoxes which are inherent to actual thinking.
To extend the range of formal logic and thereby ameliorate this inability, mathematicians have devised generalized counterparts to traditional logic, and AI researchers are employing them. But because these alternatives only extend the range of traditional logic and do not address its fundamental inadequacy, they offer little more insight into human rationality than do the ancient methods. Fuzzy logic extends the concept of "logical value" to include all real numbers between zero and one; in effect, it grafts something akin to probability theory onto that artifice of formal logic known as the truth table. AI applications incorporating fuzzy logic usually avoid combining its reasoning scheme with other techniques such as multivaluedness and nonmonotonicity, because of the complexity which such composites generate. The methodology of the statistics employed to integrate fuzzy logic into AI programming is cumbersome enough already without compounding the difficulties by introducing several distinct, yet simultaneously employed, logical methodologies in addition. Multivalued logic is a generalization of formal logic similar to fuzzy logic: where fuzzy logic increases the number of possible logical values, multivalued logic increases the number of allowable types of logical value from merely two or three (i.e. true, false, or nil) to perhaps infinitely many. Nonmonotonic reasoning likewise extends conventional monotonic reasoning by relaxing formal logic's convention that the set of derivable conclusions can only grow as premises accumulate: under nonmonotonic reasoning, a conclusion drawn from a given set of premises may be withdrawn when further premises are added.
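
For readers unfamiliar with these generalizations, a brief sketch in Python may help (toy definitions under the article's characterizations, not any particular AI library):

# Fuzzy logic: truth values range over [0, 1] instead of {True, False}.
# Standard (Zadeh) connectives.
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b):  return max(a, b)
def fuzzy_not(a):    return 1.0 - a

print(fuzzy_and(0.7, 0.4))   # 0.4 -- a "grey" value, neither true nor false

# Nonmonotonic reasoning: a default conclusion holds only in the absence
# of contrary premises, so adding a premise can retract it.
def conclusions(premises):
    out = set()
    if "bird" in premises and "penguin" not in premises:
        out.add("flies")                 # default, defeasible conclusion
    if "penguin" in premises:
        out.add("does not fly")
    return out

print(conclusions({"bird"}))             # {'flies'}
print(conclusions({"bird", "penguin"}))  # {'does not fly'} -- conclusion retracted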


The above innovations in the technique of formal logic are only those which are most often cited, but do
reflect a movement among AI theorists that sees a need for closer agreement between formal logic and some
aspects of human thought. Yet unsatisfactory progress in AI bears witness that such artifices, separately or in
combination, do not make substantial advances in that direction. This is because they are merely extensions of
what has always been only a semblance of human reasoning (i.e. the formal logic of traditional rationalism),
rather than extensions to a heuristic theory encompassing the principal manner by which the mind actually
produces objectifications of rationality.
Charting the provinces of the mind which produce such objectifications of rationality is quite an undertaking.
Ask any neurobiologist. In lieu of this effort, AI theorists have resorted to various definitions of intelligence
premised upon traditional rationale. Of course, in deference to scientific realism most modern definitions of
intelligence have been framed with the intention of rooting out arguments based upon an internal sentient
observer, historically known as the homunculus. Philosophers and psychologists alike realize that such "ghost in the machine" arguments beg the question; computer scientists, for their part, seem to have understood that models of the mind based upon the expedient of such an internal observer, with its built-in artifices of inherent life experiences and contextualization abilities, are ultimately nonanalytic. That is, they fail to
provide one with neatly compartmentalized mechanisms which hang together to fully explain intelligence.
Plainly, any theory of intelligence should be analytic in order for it to be programmable. Of course, many
philosophers and at least one prominent neurobiologist take issue with the premise that natural
intelligence cannot be studied with rigor except in strictly rational terms, because natural intelligence is
intrinsically a holistic phenomenon and thus noncompartmentalizable.9 Their writings suggest that, rather
than reduce natural intelligence to a hollow shell of its true nature, AI programmers should seek to understand its idiosyncrasies by conceiving them as perturbations in a general theory of intelligence. Such a general
theory would be analytic, but would also be augmented by correction factors which account for the distortion
which the requirement of analyticity introduces. Many scientific theories are set up to incorporate perturbation factors in just this way. On this view, AI programs should be developed so as to allow deviations from the simulation of ratiocination to be factored in, mirroring the way that affective components of natural intelligence skew rationale.
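
As a purely illustrative sketch of this perturbation framing (all names are invented, and the functional form is chosen only by analogy with first-order perturbation series in physics):

# A hypothetical analytic core plus perturbative correction terms.
def intelligence(x, analytic_core, corrections, epsilon=0.1):
    """Core estimate plus correction terms scaled by powers of epsilon."""
    return analytic_core(x) + sum(epsilon ** (k + 1) * c(x)
                                  for k, c in enumerate(corrections))

# Toy usage: a purely rational scorer skewed by an affective correction.
rational = lambda x: x["evidence"]
affect = lambda x: -x["dread"]
print(intelligence({"evidence": 0.8, "dread": 0.5}, rational, [affect]))  # 0.75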
But AI theorists are unaware of, ignore, or sometimes spurn such counsel. Although current AI efforts are diverse, the vast majority rely upon reasoning which remains true to inculcated tradition and which is, thus, incongruent with human reasoning. Yet many philosophers have been adamant about the
importance of incorporating at least some notion of emotional content into schemes of rational inference,
including AI algorithms. In his Artificial Intelligence: The Very Idea, John Haugeland asserts, " ... actual,
current feelings (such as embarrassed twinges, vicarious excitement, and the like) may be essential factors in
real-time understanding."10 He goes further and insists that the incorporation of mood into AI systems is one
essential difficulty which AI theorists must overcome if their discipline is ever to expand its horizons beyond
that which reasons from within a microworld to that which can reason within the macroworld of human
reality.11 Haugeland's concerns have taken on renewed urgency, as his philosophical speculations appear to have been confirmed by recent neurobiological experiment (we shall describe this presently). AI theorists
should consider what the experimental data of neurobiologists has to say about the nature of thought, then
seek to frame a truer realization of natural intelligence by means of an analytic that accounts for this data.
In other scientific disciplines, when a problem is approached at so fundamental a level, what seems key besides experimental data is the postulation of some invariant(s) which remain independent of their frame of reference. Invariants are to an analytic what a priori principles are to a metaphysics: semantic centers about which the thoughts of the mental universe revolve. Resort to a Physical Symbol System Hypothesis and a Principle of Medium Independence seems to have been an early but misguided attempt by AI theorists to introduce such fiducial standards into the discussion. It would appear that AI, like any hard science, needs some analog of the mathematical concept of a metric. Such an idea is novel, but not completely foreign to the study of logic.
A notion as to the logical distance between any two consequents is contained in the archaic usage of the word
"propinquity." At one time, the meaning of propinquity entailed a connotation taken from logic which
indicated the degree of adjacency or closeness between any two consequents (one a principal and the other its
derivative). The term was often used in rhetoric to refer to the degree of similarity in adjacency between two sets of consequents (for the sake of analyzing the validity of an analogy pivotal to some argument). An
invariant metric principle of unique "least distance" between two logical consequents would mean that,
regardless of their logical context, the measure of their adjacency would remain the same.
A well-defined field of mathematics, measure theory, is available to ensure uniformity in applying this concept of propinquity to automated methods of inference.12
Just as with most metrics, this "least distance" would be very small in comparison to the majority of
measurements between consequents and could be used as a measure over such extensions, just as inches and
feet are used to measure the mile and can be converted to centimeters, seconds of arc, lengths of string, etc.
The logical distance between special (or "bivalent") pairs of consequents separated by a single unit of such a
theoretical invariant metric might appear to be nil, far less than the extensity produced by, say, a single
nontautologous illation of deductive logic; yet it nevertheless would theoretically remain discernible, in that
the formal character of the relationship between the principal consequent and its derivative could be taken to
surpass mere redundancy. What's more, even if any two such consequents were placed into an arbitrary logical context, the "logical distance" between them would remain the same, because of the invariant nature of the single metric unit which unites them. A theory of intelligence based upon such a metric
unit could be employed to bridge the gap between the two basic modes of thinking mentioned above: (1) the
mind's dynamic experience of subjective rationality, and (2) the objectification of this dynamic experience
through exercise of various means of representation. This is because such invariants would make it possible
for the conclusions of an automated formalism system based upon a relatively robust (i.e. more
comprehensive) logical environment to be readily translated into logical statements couched in terms of an
automated formalism system with a comparatively less robust logical environment. In other words, it might be
possible for conceptual-level semantics to be translated into a more standard context, even one based upon the
West's traditional logic.
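
A toy sketch in Python may make the proposal more tangible (entirely hypothetical, a crude stand-in for the measure-theoretic treatment the article gestures toward). Here the invariant unit is a single application of a derivation rule, and propinquity is the least number of such units separating a principal from its derivative, a count that does not change when the pair is embedded in extra context:

from itertools import count

def propinquity(principal, derivative, rules, limit=10):
    """Least number of single-rule illations taking principal to derivative."""
    frontier = {principal}
    for distance in count(0):
        if derivative in frontier:
            return distance          # the unique "least distance"
        if distance >= limit:
            return None              # not reachable within the horizon
        # One invariant unit: a single application of any derivation rule.
        frontier |= {rule(c) for c in frontier for rule in rules}

# Toy rules over string "consequents". Because distance counts rule
# applications, surrounding logical context leaves the measure unchanged.
rules = [lambda c: c + "'", lambda c: "not not " + c]
print(propinquity("p", "p''", rules))   # 2: two unit illations apart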
Modern neurobiology points to a physiological basis upon which to start looking for a source of such metric
units (and other logical invariants), although from such discoveries it is not yet at all obvious what the guiding
rules governing such geodesic principles ought to be, nor even whether the formulation of such rules must of necessity be undertaken in a manner completely devoid of traditional approaches. Neurobiologist Antonio
Damasio's book, Descartes' Error: Emotion, Reason, and the Human Brain (1994), recounts the past twenty
years of research into the brain. Damasio's personal research involved unfortunate persons who suffered brain
damage which left them unable to feel emotions. His findings and those of his colleagues have led Damasio to
conclude that, " ... feelings are as cognitive as any other perceptual image ...," and that, " ... feelings have a
say on how the rest of the brain and cognition go about their business."13 In fact, it is Damasio's informed
opinion that, "Rationality is probably shaped and modulated in body signals, even as it performs the most
sublime distinctions and acts accordingly."14 In other words, one cannot arbitrarily separate rationality from
an understanding of the human organism taken as a whole, as existing in society.
Startling as this conclusion may be, Damasio is uncompromising and, backed by experimental evidence, goes on to conjecture that a " ... triggering of activity from neurotransmitter nuclei ... can bias cognitive processes in a covert manner."15 He asserts that nonconscious "signal body states" are always hovering in the background of
consciousness. These track the pulse of an aggregate mental fundament which underlies all cognition. This
aggregate is comprised of a priori conditions for knowledge in general, relationships among all known
universals, and intuitions about potential ramifications which lie dormant within legitimate universes of
discourse. Higher level cognitive faculties can make overt reference to this aggregate mental fundament, but
more commonly do so only indirectly because of constraints of semantic complexity and, therefore, of time.
Such reference is important because it is instrumental in eliminating prospective results of pure logic which
defy common sense. More importantly, signal body states can actually steer the deliberations of higher level
cognitive faculties by being activated,
" ... but not ... made the focus of attention. Without attention, neither
[the signal body states nor their corresponding aspects in the
aggregate mental fundament] will be part of consciousness."16
By default, certain aspects of the aggregate mental fundament become referred to the mind over others and, effectively, a bias is set up in cognitive processing which corresponds to the slant cast by signal body states. This
means that signal body states are essential for any human intelligence just to remain on point. What's more,
they are essential to creative reason because they keep it from becoming ill-defined and from straying outside
the bounds of its pertinent universe of discourse.
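
A speculative sketch in Python (names and numbers invented for illustration; this is in no way Damasio's model) of how such covert biasing might be grafted onto an inference engine:

def deliberate(candidates, markers):
    """candidates: {conclusion: logical support in [0, 1]};
    markers: covert biases in [-1, 1], never themselves premises."""
    biased = {
        c: support * (1.0 + markers.get(c, 0.0))   # covert modulation
        for c, support in candidates.items()
    }
    return max(biased, key=biased.get)

# Invented example: background unease biases against the shortcut even
# though pure logic scores it higher.
candidates = {"take the shortcut": 0.9, "take the lit road": 0.8}
markers = {"take the shortcut": -0.4}
print(deliberate(candidates, markers))   # 'take the lit road'

The markers never appear among the premises; they merely re-weight conclusions that logic has already licensed, which is the sense in which the biasing remains covert.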
Damasio calls this theory of signal body states the somatic marker hypothesis. He documents a good deal of evidence to support it in his book, and to the degree that neurobiology verifies it, it will render the above-mentioned Physical Symbol System Hypothesis and Principle of Medium Independence of AI theorists untenable. This is because the somatic marker hypothesis implies that cognition is a unique medium for the expression of rationality. The workings of natural cognition involve many distinct types of symbol systems which nevertheless act together holistically on many levels, each inextricably intertwined. Natural cognition comprises an interworking of integral thoughts, not a checklist of ratiocinations. No symbol system with a linear, static infrastructure could replace the mind's
nested character, nor even straightforwardly replicate it, whether that symbology occupies paper or computer
chips. To a neurobiologist, the phenomenon of cognition does not seem to possess the quality of inherent
analyticity. The mind arises from "parcellated" mental activity.17
Despite the scientific fact of this parcellation, it nevertheless remains theoretically possible to develop an
analytic theory of intelligence which AI theorists could then use to design simulations of natural intelligence.
Precisely because mental activity is parcellated, it is possible to determine whether or not it is metrizable. If it is, then a theory could be constructed which would be analytic only in the formal sense (for the sake of programmability). It would be an analytic arrived at not through a dissecting, empirical approach but through theoretical speculation: a covey of proposed laws of thought developed according to some tentative overall strategy, rather than heuristically.
With this in mind, the situation confronting the AI community today appears to be similar to that which faced
physicists early in this century. The response made by the physical science community to the breakdown of
classical physics still seems counter-intuitive and arcane when considered today. But numbers generated by the theoretical calculations of the young physicists Einstein and, later, Heisenberg compared favorably to numbers produced in scientific experiment. Such successful predictions upheld outlandish premises which, decades later, Western culture has wholeheartedly embraced as truths.

Albert Einstein believed that often there is no other way to make a problem tractable except to theorize first and ask questions later. The very problem of modeling cognition seems to be a question of this sort. We have seen above that, from its current state of the art, AI cannot be developed in a straightforward manner into a discipline commensurate with, or even compatible with, current neurobiological facts, because it is dependent upon traditional paradigms of logic which science has shown to be no longer apropos for such investigation. If AI is to fulfill its promise, a radically new approach is required, one that entails a thoroughgoing reassessment of human reasoning from first principles.
This reassessment must in some way entail the concept of integral thought. This will require creative
methodology and artful illustration responsive to the full range of human experience. Conventional approaches seem at odds with finding a solution. If any representation of intelligence is to be true to life, it
will be so because such a representation happens to be conceptualized full-blown from the start, as an
educated guess which fortunately proves to be experimentally verifiable in its particulars as to the nature of
intelligence. Many rigorous scientific disciplines depend upon such contrivance all the time. It is not clear
how to expedite such paradigm formation. No blueprint is available to be followed. Although the model must
be algorithmic so that it may be represented in programming language and run on a computer, its formulation
shall be a creative act and not an analysis per se.
In the two articles to follow, we shall entertain ways to accomplish such theorization. The first of these two,
Hierarchical Planning Is Not Enough: AI should espouse the origin of common sense, argues that the entire
concept of AI should be regrounded. Programming machines to simulate natural intelligence should be
recognized for what it is, the process of revealing an ontological connection between electronics and human
beings. AI is not something that takes place "out there" in a piece of electronics, but takes place "from within"
the essence of humanity at a certain level of abstraction where electronics and their human creators
unite as a single existential entity. The third article of the series, Holographic Intelligence Is Not Enough: AI should acknowledge the ontology of reason, includes part of the first chapter of a book in progress with the working title Curiosity: Application Of The Learning Emotion To Artificial Intelligence. That book will sketch a theory of facultative reasoning delineated in terms of an abstract realm known as the reasoning milieu. A taxonomy of qualitative attributes of subjective consciousness is outlined which details analytic descriptions capable of engendering algorithms incorporable into AI programs. This theory is briefly evaluated in terms of the AI community's major schools of thought.

Endnotes
1 Consciousness In Philosophy And Cognitive Neuroscience, ed. by A. Revonsuo & M. Kamppinen, Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers, 1994, p. 6.
Preston, B., Representational and Non-representational Intentionality: Husserl, Heidegger, And Artificial Intelligence, Boston
University Graduate School Ph.D. thesis, order number 8813649, Ann Arbor, MI: University Microfilms Inc., 1988, pp. 5-6.
2 Newell, A. & Simon, H.A., "Computer Science as Empirical Inquiry," in Mind Design, Ed. by J.
Haugeland, Cambridge, MA: The MIT Press, 1981, pp.41-43.
3 Chapman, D. & Agre, P., "Abstract Reasoning As Emergent From Concrete Activity" in Reasoning About Actions And Plans, ed. by

M. Georgeff & A. Lansky, Los Altos, CA: Morgan-Kaufman Publishers, Inc., 1987.
Pollock, J., How To Build A Person: A Prologomena, Cambridge, MA: The MIT Press, 1989.
Geffner, H., Default Reasoning: Causal And Conditional Theories, Cambridge, MA: The MIT Press,
1992.
4 Haugeland, J., Artificial Intelligence: The Very Idea, Cambridge, MA: The MIT Press, 1985, p. 243.
5 Searle, J., "The Problem Of Consciousness" in Consciousness in Philosophy And Cognitive Neuroscience, ed. by A. Revonsuo & M. Kamppinen, Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers, 1994, p. 103.
6 Ramachandran, V., in the episode "Touch," part of the television series The Senses, Public Broadcasting System, aired Feb. 22, 1995, 8:00 pm EST.
7 Mehta, J., Martin Heidegger: The Way and the Vision, Honolulu, HI: University Press of Hawaii, 1976, p. 434.
8 Geffner, H., Default Reasoning: Causal and Conditional Theories, Cambridge, MA: The MIT Press, 1992.
Marr, D., "Artificial Intelligence: A Personal View," in Mind Design, Ed. by J. Haugeland, Cambridge, MA: The MIT Press, 1981, pp.
132-134.
Haugeland, J., Artificial Intelligence: The Very Idea, Cambridge, MA: The MIT Press, 1985, p. 195.
9 Preston, B., Representational and Non-representational Intentionality: Husserl, Heidegger, And Artificial Intelligence, Boston University Graduate School Ph.D. thesis, order number 8813649, Ann Arbor, MI: University Microfilms Inc., 1988.
Rosenschein, S., "Formal Theories Of Knowledge In AI And Robotics," in New Generation Computing, Vol. 3, No. 4, pp. 345-357, 1985.
Searle, J., "Minds, Brains, And Programs," in Behavioral and Brain Sciences, Vol. 3, pp. 417-457, 1980.
Dreyfus, H., Being-in-the-World: A Commentary On Heidegger's Being And Time, Cambridge, MA:
The MIT Press, 1991.
Block, N., "Troubles With Functionalism," in Readings In Philosophy Of Psychology , (pp. 268-306).
Cambridge, MA: Harvard University Press.
Damasio, A. Descartes' Error: Emotion, Reason, And The Human Brain, New York, NY: G. P. Putnam's Sons, 1994.
10 Haugeland, J., Artificial Intelligence: The Very Idea, Cambridge, MA: The MIT Press, 1985, p. 240.
11 Haugeland, J. (ed.), Mind Design, Cambridge, MA: The MIT Press, 1981, p. 41.
12 Halmos, P.R., Measure Theory, Princeton, NJ: D. Van Nostrand Co, Inc., 1950.
13 Damasio, A., Descartes' Error: Emotion, Reason, and the Human Brain, New York, NY: G.P. Putnam's Sons, 1994, pp. 159-160.

14 Ibid., p. 200.
15 Ibid., p. 185.
16 Ibid., p. 185.
17 Ibid., pp. 93-113.
