
Evaluation of the Claim that the Mind is like a Computer

Student: Alan Cummins 1165236
Lecturer: Dr. Rebecca Maguire
Cognitive Psychology PSY281
INTRODUCTION

“We used to argue whether a machine could think. The answer is, ‘No’. What thinks is a total circuit, including perhaps a computer, a man, and an environment. Similarly, we may ask whether a brain can think, and again the answer will be, ‘No’. What thinks is a brain inside a man who is part of a system which includes an environment.”

Gregory Bateson (1972)

The question of whether the mind is analogous to a computer is a difficult and emotive one. Philosophical debate can be had over whether a computer can be human. However, this essay focuses on a different aspect of the mind-computer correspondence, namely whether the mind can be seen to think like a computer: are there comparable features in the process of cognition that act in a logical and mathematical way, much as a computer does? The mind and the computer are first defined, and the cognitive analogy of mental processes being equivalent to computation is discussed. The two main approaches to computational cognitive research, symbolism and connectionism, are then examined, followed by a suggestion as to what the future holds.

WHAT IS A BRAIN, WHAT IS A COMPUTER?

The mind can be considered to be an element, part, substance or process that reasons, thinks, feels, wills, perceives and judges. It is the combination of both the conscious and unconscious mental processes and activities. The mind has intellect and, more importantly, understanding and comprehension of what it is doing. These processes acquire, share and use information from the environment to make sense of the world and take an appropriate action or reaction depending on the situation. Mental processes such as these are active, efficient and accurate, making use of both top-down and bottom-up processing. The mind can be said to be computational in manner: it makes use of representations and algorithmic thought processes to achieve an end goal. The computer can be described as an electronic device designed to accept data, perform prescribed mathematical and logical operations at high speed, and output the results. The mind, similarly but not exclusively, can work in such a manner, logically storing and using information provided by external or internal stimuli to produce an end result. At a high level the mind is structured similarly to a computer, with processing units that have independent but related functionality and that, taken as a whole, achieve something greater than the sum of their parts. Computers are structured in a similar manner, with processing units that take care of various components of I/O, memory retrieval and storage, and functional processing. At this architectural level the mind does appear to work like a computer, with central nodes (specific brain regions) interconnected via wires (neurons), and even electrical impulses being carried throughout the system.

BASIS OF COGNITIVE ANALOGY

Taking ancient philosophy as a starting point, we can consider two alternative viewpoints on how a mind comes to have the ability to process and comprehend at an early stage of development. A computer has a BIOS which gives it some initial concept of its components and capabilities. This can be equated with Plato's rationalism, which held everyone to be born with innate concepts and knowledge. Aristotle, however, held with empiricism, where knowledge is acquired through experience alone. Tethered to Freud's concept of the formation of the psyche, that of the human mind being inextricably tied to language, it could be argued that through software (language) and intelligent recursive interaction (self-regulating feedback within an environment) the mind functions like a computer system. Cognitive science seeks to understand the architecture, representations and processes that make up the mind. There is a key assumption that the mind can be described and/or simulated in an abstract and rigorous way. This would suggest that the mind is analogous to a computer system; it is only a matter of determining the means of building representations and flexible rule-sets to mimic it. The mind has to build internal representations of worldly objects in order to be able to retrieve and use them at later moments in life. These can be equated to internal representations in a computer, with logic gates that individually carry out minor instruction sets but as a whole function co-operatively to process information (McCulloch & Pitts, 1943).
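To make the idea concrete, consider the following minimal sketch in Python (assuming a simplified threshold formulation of the McCulloch-Pitts unit): individual units each carry out a trivial logic function, yet wired together they compute something none of them computes alone.

    # A minimal sketch of a McCulloch-Pitts style threshold unit (a
    # simplified formulation, assumed here for illustration).

    def mp_neuron(inputs, weights, threshold):
        """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Individual units carry out minor instruction sets: basic logic gates.
    def AND(a, b):
        return mp_neuron([a, b], weights=[1, 1], threshold=2)

    def OR(a, b):
        return mp_neuron([a, b], weights=[1, 1], threshold=1)

    def NOT(a):
        return mp_neuron([a], weights=[-1], threshold=0)

    # Functioning co-operatively, the units process information none of
    # them handles alone, e.g. XOR composed from AND, OR and NOT.
    def XOR(a, b):
        return AND(OR(a, b), NOT(AND(a, b)))

    assert [XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]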

It should be noted that mental processes are abstract in nature, whereas computers work from solidly grounded theoretical mechanisms. The mind as a computer is an analogy and should not be considered any more than that. The analogy is based on equating the brain to hardware and the mind to software. There is an indication here that you cannot have cognitive processes without a means of affecting the environment, and that those cognitive processes work within the confines of their environment in a cyclic manner. A computer system cannot exist on its own, except in a thought experiment, as a floating set of rules or algorithms without any bounding constraints. It must be defined, confined and allowed to exert control via its structure and environment (hardware).


SYMBOLIC AND CONNECTIONIST APPROACHES TO COMPUTATIONAL COGNITION

There are two main schools of thought in relation to computational cognitive theory, namely the symbolic and the connectionist. Each lends both positive and negative impetus to the analogy of the mind as a computer. Newell and Simon (1972) proposed the Physical Symbol System Hypothesis, which is the backbone of the symbolic approach. This suggests that all knowledge can be expressed by symbols and that thinking involves manipulation of these symbols according to set rules. These rules and symbols have no relation to the underlying biology (or machinery) on which they reside, so by association the mind could be considered to function like a computer even if the underlying structures of minds and computers are different. Broadbent (1958) suggests that cognition is purely a sequential series of processing steps. Similarly, Atkinson and Shiffrin (1968) proposed a multi-store model of memory that captures stimuli, attends to them through perceptual processes, and stores the resultant information in short- or long-term memory. These hypotheses are undeniably linked to the concept of a logical, sequential computing device, much like any modern-day computer: stimuli are recorded by input devices (cameras, keyboards and other interfaces), interpreted and stored in a short-term memory cache, or written out to long-term memory.
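The sequential character of these models can be sketched as a simple pipeline. The following is an illustrative sketch only; the stage names and the capacity limit are assumptions, not part of the cited models' specification.

    # Illustrative sketch of a multi-store, strictly sequential memory
    # pipeline. Stage names and the capacity limit are assumed.

    SHORT_TERM_CAPACITY = 7          # assumed limit on the short-term cache

    short_term_memory = []           # fast, limited-capacity cache
    long_term_memory = set()         # slower, durable store

    def perceive(stimulus, attended, rehearsed):
        """Pass one stimulus through the stores in strict sequence."""
        # 1. Sensory registration: unattended stimuli decay immediately.
        if not attended:
            return
        # 2. Short-term store: oldest item displaced once the cache is full.
        short_term_memory.append(stimulus)
        if len(short_term_memory) > SHORT_TERM_CAPACITY:
            short_term_memory.pop(0)
        # 3. Rehearsal writes the item out to the long-term store.
        if rehearsed:
            long_term_memory.add(stimulus)

    perceive("phone number", attended=True, rehearsed=True)
    perceive("background hum", attended=False, rehearsed=False)
    print(short_term_memory, long_term_memory)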

The analogy relies on a functionalist separation of mind and body: the same operation can be realised physically in multiple ways, but the end result is the same. Knowledge consists of declarative facts, combined into meaningful chunks of memory that can be stored explicitly. Schematic knowledge of this kind does exist and has been studied by Brewer and Treyens (1981) in relation to perception and memory. Procedural rules (skills, heuristics) can then be applied to manipulate the knowledge stored. Anderson's skill acquisition theory (1983) suggests that the human mind works in such a manner, noting that experts pass through a proceduralisation stage, in which declarative knowledge becomes procedural, and then a composition stage, in which multiple rules are combined into single rules. This ties in with how modern programming is designed and produced: an object-oriented approach, with objects of a specific class tied together in functional or procedural methods using recursive and looping constructs in combination with simple Boolean logic in the form of IF-THEN-ELSE statements.
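A hypothetical production-system sketch (the facts and rules here are invented purely for illustration) makes the parallel concrete: declarative facts sit in a working memory, IF-THEN rules fire against them, and two practised rules can be composed into one.

    # Hypothetical production-system sketch; the facts and rules are invented.

    facts = {"light": "red"}         # declarative knowledge (working memory)

    # Procedural knowledge: IF-THEN rules applied to the declarative store.
    def rule_stop(f):
        if f.get("light") == "red":
            f["action"] = "brake"

    def rule_signal(f):
        if f.get("action") == "brake":
            f["brake_lights"] = "on"

    # Composition stage: two rules practised together collapse into one.
    def rule_stop_and_signal(f):
        if f.get("light") == "red":
            f["action"] = "brake"
            f["brake_lights"] = "on"

    rule_stop_and_signal(facts)
    print(facts)  # {'light': 'red', 'action': 'brake', 'brake_lights': 'on'}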

Newell and Simon (1972) suggest that any system that can manipulate symbols is intelligent, but John Searle (1980) counters that human thought has meaning. The analogy breaks down somewhat at this point, because as humans we attribute meaning to information, while computers can be seen as brute-force serial processors that do not consider their actions but simply follow basic rule-sets. Thought experiments such as the Turing Test (Turing, 1950) and Searle's Chinese Room expose the failings of the symbolic approach and in turn the failings of the mind-computer analogy. A new approach is required to more closely mimic the functionality and cognitive processing of the mind. The human mind is not serial, nor does it hold all knowledge as propositional symbols: where, for instance, would concepts such as emotion and feeling reside in such a knowledge representation framework? The human mind is also slower, as Sternberg (1999) discussed, the signalling speed of neurons being no match for the interconnected electronic pathways of a computer.

Connectionism, as McClelland (1986), Hebb (1949) and Rosenblatt (1958) hypothesised, is required to understand the structure as well as the functionality of the mind and its relation to computational systems. The human brain is hugely parallel in nature, using a distributed network that fires electrical and chemical impulses across synaptic gaps based on patterns of excitatory activation via neurons. There is no single central executive; the brain is better regarded not as a monolithic whole but as a set of interconnected sub-systems. Computer systems increasingly act in a similar manner, making use of the power of distributed computing to break problems down and achieve results. At a lower level, computer systems apply the concept of elementary units programmatically working as a whole, via inhibition and excitation, to trigger an event or result depending on a threshold level. Unlike the symbolic approach, no set pattern is provided, only a means of tracing a path through the elementary units.
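A minimal sketch of such elementary units (the network layout, weights and threshold are invented for illustration) shows excitatory and inhibitory connections jointly deciding whether a unit fires.

    # Illustrative sketch of threshold units driven by excitation and
    # inhibition. The layout, weights and threshold are invented.

    THRESHOLD = 0.5

    # Connection weights: positive = excitatory, negative = inhibitory.
    weights = {
        ("see_food", "eat"): 0.8,
        ("feel_full", "eat"): -0.7,
    }

    def activation(unit, inputs):
        """Sum the weighted influences on a unit from the active input units."""
        return sum(weights.get((src, unit), 0.0) * level
                   for src, level in inputs.items())

    def fires(unit, inputs):
        # No set pattern is given; the outcome depends on which path the
        # activation traces and whether it clears the threshold.
        return activation(unit, inputs) >= THRESHOLD

    print(fires("eat", {"see_food": 1.0}))                    # True
    print(fires("eat", {"see_food": 1.0, "feel_full": 1.0}))  # False (0.1 < 0.5)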

Some consideration must be given to Schacter (1987) and the concept of single events, which seems to contradict the multiple coordinated effect that governs connectionist systems; that is, the brain can work in both a symbolic and a connectionist manner as required. Rules are seen not as explicit but as shifting sets of weighted pathways. There is a biological similarity between the structure of the mind and the structure of Artificial Neural Networks (ANNs). Just as the human mind grows and increases in ability through experience, so does the ANN.



Chase and Ericsson (1981) have described how, rather than expertise and skill being tied exclusively to innate ability, it is practice and reinforcement of skill-sets that allow experts to out-perform those around them. ANNs can be seen to abide by this rule, as they increase their knowledge and the efficiency of their pathways through repeated tracing from problem to solution.
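This strengthening-through-repetition can be sketched with a simplified Hebbian-style update (the learning rate and the saturating form of the update are assumptions for illustration).

    # Simplified Hebbian-style strengthening; the learning rate is assumed.

    LEARNING_RATE = 0.2

    pathways = {("problem", "solution"): 0.1}   # initially weak connection

    def trace(src, dst):
        """Each successful traversal strengthens the pathway (capped at 1.0)."""
        w = pathways.get((src, dst), 0.0)
        pathways[(src, dst)] = min(1.0, w + LEARNING_RATE * (1.0 - w))

    for _ in range(10):                          # practice, as in skilled experts
        trace("problem", "solution")

    print(round(pathways[("problem", "solution")], 3))  # ~0.903: a well-worn route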

NETTalk, by Sejnowski and Rosenberg (1987), is a good example of such a network in use as a model of speech production. However, Ratcliff (1990) has noted that the human mind can quickly unlearn established patterns of connections, unlike a connectionist network, which retains such patterns. Another point in favour of the mind-computer analogy is that the mind can still function even if some memories are lost or incomplete; Sternberg (1999) points out that ANNs equally have this ability, exhibiting content addressability and graceful degradation when damage has occurred. Connectionist networks can also be seen to work in a similar fashion to the mind with respect to memory recency effects, noted as early as Calkins (1894), whereby the more recent the memory, the easier it is to recall. Similarly, in a connectionist network, if a pathway has only recently been travelled the network will take this efficient route again, whereas after a period the strength of the pathway will have diminished and it will take longer to re-establish.
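The recency effect can be sketched in the same vocabulary (the exponential form and the decay rate are assumptions for illustration): a pathway's strength falls off with the time since it was last travelled, so recent routes are re-used cheaply while stale ones must be re-established.

    import math

    # Illustrative decay model of recency; the decay rate is an assumption.
    DECAY_RATE = 0.3

    def current_strength(initial_strength, steps_since_last_use):
        """Pathway strength falls off exponentially with disuse."""
        return initial_strength * math.exp(-DECAY_RATE * steps_since_last_use)

    # A recently travelled pathway keeps most of its strength...
    print(round(current_strength(1.0, 1), 2))   # 0.74
    # ...while one unused for longer has diminished and is slower to re-establish.
    print(round(current_strength(1.0, 10), 2))  # 0.05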

COMPARISON OF SYMBOLIC AND CONNECTIONIST APPROACHES

Having discussed the major tenets of the computer as a model of the human mind, we shall now compare and contrast them within the realms of our discussion. Evidential reports, as noted above, suggest that the knowledge used by the mind is both implicit and explicit in nature, whereas in a symbolic computer system knowledge is hard-wired and explicit, in comparison with the implicit knowledge of connectionist networks. Representations in symbolic systems are localised and processed in a serial manner, much like the proposed memory models of the human mind. However, it is apparent that the mind can and does work on distributed inputs in a parallel manner, and connectionist systems can and do mimic this. Learning in the human mind is flexible according to Ohlsson's (1992) representational change theory, whereby constraints can be relaxed and information elaborated on and re-encoded; the symbolic approach cannot replicate this, while the connectionist approach can. Both approaches have strengths and weaknesses when held up against the mind-computer analogy. The mind has cognitively penetrable processes, those which are conscious, and cognitively impenetrable processes, those that are unconscious. Symbolic systems excel at explaining conscious processes, while connectionist systems are good at aiding understanding of unconscious processes.

NEW DIRECTIONS

Both approaches above go some way towards describing the functionality of the mind as that of a computational system. However, both fail to address how the structure of, and the information around, every organism is tied to behaviour, action, capabilities and perception. The mind is embodied; it does not exist in a vacuum, as Pecher and Zwaan (2005) have noted. Symbols cannot, as in the symbolic system, be considered discrete and arbitrary entities, but rather as symbols making reference to the objects around them; Harnad (1990) has discussed this notion as the symbol grounding problem. Phenomenology and Freudian theorists such as Lacan discuss the idea of signifiers and the signified, the notion of concepts in and of themselves but also, importantly, as part of a greater sense of being. Intelligence in terms of a computer system generally boils down to the number of calculations possible in a given second (MIPS), or to competence in a specific domain, as with the robot Shakey, developed at the Stanford Research Institute in the late 1960s and early 1970s. Dreyfus (1992) discusses how formal representations are not required for intelligence and how everything is based on interaction between the human and the world. Computers are extremely limited in this respect. Brooks (1991) points out that the many means of representing knowledge within a computer system do not come close to truly representing what an object is, a problem known as transduction. The human mind works by using minimal representation, making use of sensation and perception to make sense of the world around it. Everything has a perceptual affordance (Gibson, 1979) which should be used. The mind is far more robust than computer systems as they currently stand, with an ability to adapt and evolve within its environment. It is also able to react immediately to its situation, whereas computer systems are bound by a sense-plan-act loop.
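The contrast can be sketched as two control loops (the sensing and action routines below are hypothetical placeholders): the classical cycle deliberates over a world model before every action, while a reactive, bottom-up controller couples sensing directly to action.

    # Hypothetical sketch contrasting deliberative and reactive control.
    # The sensing and action functions are invented placeholders.

    def sense():
        return {"obstacle_ahead": True}

    def act(command):
        print("executing:", command)

    def sense_plan_act():
        """Classical loop: build a world model, plan, then act."""
        world_model = sense()               # sense
        plan = ["turn_left"] if world_model["obstacle_ahead"] else ["forward"]
        for step in plan:                   # act only after planning completes
            act(step)

    def reactive_step():
        """Bottom-up: sensing is wired straight to action, no world model."""
        if sense()["obstacle_ahead"]:
            act("turn_left")
        else:
            act("forward")

    sense_plan_act()
    reactive_step()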

Computer systems will continue to develop, and must try to capture a sense of embodiment and situatedness, doing so via a bottom-up design that works on achieving simple tasks rather than trying to complete a total mind system. Such systems could then, in a manner analogous to how the human brain was formed, begin to evolve, perhaps one day making a switch from a Darwinian to a Lamarckian theory of evolution, or to evolution without biology. It is, however, clear that computers are making a paradigm shift from pure logic engines to systems that more closely mimic the cognitive capabilities of the human mind, and so in future years it will become difficult to know whether the mind is like a computer or whether the computer has evolved into a mind.



References

Anderson, J.R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.

Anderson, J.R. (1993). Rules of the mind. Hillsdale, NJ: Lawrence Erlbaum Associates Inc.

Atkinson, R.C., & Shiffrin, R.M. (1968). Human memory: A proposed system and its control processes. In K.W. Spence & J.T. Spence (Eds.), The psychology of learning and motivation, Vol. 2. London: Academic Press.

Bateson, G. (1972). Steps to an ecology of mind: Collected essays in anthropology, psychiatry, evolution and epistemology. Chicago: University of Chicago Press.

Brewer, W.F., & Treyens, J.C. (1981). Role of schemata in memory for places. Cognitive Psychology, Vol. 13, pp. 207-230.

Broadbent, D.E. (1958). Perception and communication. London: Pergamon Press.

Brooks, R.A. (1991). Intelligence without representation. Artificial Intelligence, Vol. 47, pp. 139-159.

Calkins, M.W. (1894). Association. Psychological Review, Vol. 1, pp. 476-483.

Chase, W.G., & Ericsson, K.A. (1981). Skill and working memory. In G.H. Bower (Ed.), The psychology of learning and motivation, Vol. 16, pp. 1-58. New York: Academic Press.

Dreyfus, H.L. (1992). What computers still can't do: A critique of artificial reason. Cambridge, MA: MIT Press.

Ericsson, K.A., & Lehmann, A.C. (1996). Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annual Review of Psychology, Vol. 47, pp. 273-305.

Gibson, J.J. (1979). The ecological approach to visual perception. Boston: Houghton Mifflin.

Harnad, S. (1990). The symbol grounding problem. Physica D, Vol. 42, pp. 335-346.

Hebb, D.O. (1949). The Organisation of Behaviour. New York: John Wiley.



McClelland, J.L., Rumelhart, D.E., & The PDP Research Group (1986). Parallel distributed processing: Vol. 2. Psychological and biological models. Cambridge, MA: MIT Press.

McCulloch, W.S., & Pitts, W.H. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133.

Newell, A., & Simon, H.A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.

Ohlsson, S. (1992). Information processing explanations of insight and related phenomena. In M.T. Keane & K.J. Gilhooly (Eds.), Advances in the psychology of thinking. London: Harvester Wheatsheaf.

Pecher, D., & Zwaan, R.A. (Eds.) (2005). Grounding cognition: The role of perception and action in memory, language, and thinking. New York: Cambridge University Press.

Ratcliff, R. (1990). Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. Psychological Review, Vol. 97(2), pp. 285-308.

Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, Vol. 65(6), pp. 386-408.

Schacter, D.L. (1987). Implicit memory: History and current status. Journal of Experimental Psychology: Learning, Memory and Cognition, Vol. 13, pp. 501-518.

Searle, J. (1980). Minds, brains and programs. The Behavioral and Brain Sciences, Vol. 3, pp. 417-424.

Sejnowski, T.J., & Rosenberg, C.R. (1987). Parallel networks that learn to produce English text. Complex Systems, Vol. 1, pp. 145-168.



Sternberg, R.J. (1999). Creativity and intelligence. In R.J. Sternberg (Ed.), Handbook of creativity. New York: Cambridge University Press.

Turing, A. (1950). Computing machinery and intelligence. Mind, Vol. 59(236), pp. 433-460.

