Włodzisław Duch
Department of Informatics,
Nicolaus Copernicus University, Toruń, PL
Dept. of Comp. Science, School of Comp.
Engineering, Nanyang Technological University,
Singapore
Google: Duch
Cognitive Science
The Central Paradox of Cognition: how can the structure and
meaning, expressed in symbols and ideas at the mental
level, result from numerical processing at the brain level?
Levels of modeling, from neural tissue up to behavior:
10^-2 m, mesoscopic networks: self-organization, sensory and motor maps, population coding, continuous activity models, mean field theories; brain imaging: EEG, MEG, fMRI.
10^-1 m, transcortical networks, large brain structures: simplified models of cortex, limbic structures, subcortical nuclei; integration of functions, concept formation, sensorimotor integration, neuropsychology, computational psychiatry ...
Brain-like computing
Brain states are physical, spatio-temporal states of neural tissue.
I can see, hear and feel only my brain states! Ex: change blindness.
Cognitive processes operate on highly processed sensory data.
Redness, sweetness, itching, pain ... are all physical states of brain
tissue.
In contrast to computer registers, brain states are dynamical, and thus contain in themselves many associations and relations.
The inner world is real! The mind is based on relations of brain states.
Computers and robots do not have an equivalent of such working memory (WM).
P-spaces
Psychological spaces:
K. Lewin, The conceptual representation and the measurement of psychological forces (1938): cognitive dynamics as movement in phenomenological space.
P-space definition
P-space: region in which we may place and classify
elements of our experience, constructed and evolving,
a space without distance, divided by dichotomies.
P-spaces should have (Shepard 1957-2001):
minimal dimensionality;
distances that monotonically decrease with
increasing similarity.
This may be achieved using multi-dimensional non-metric scaling
(MDS), reproducing similarity relations in low-dimensional spaces.
Can one describe the state of mind in a similar way?
Laws of generalization
Shepard (1987), Universal law of generalization.
Tenenbaum, Griffiths (2001), Bayesian framework unifying the set-theoretic approach (introduced by Tversky, 1977) with Shepard's ideas.
Generalization gradients tend to fall off approximately exponentially
with distance in an appropriately scaled psychological space.
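The exponential fall-off can be sketched in a few lines of Python; the scale parameter `sigma` is an illustrative assumption, not a value from the slides:

```python
import math

def generalization(d, sigma=1.0):
    """Shepard-style generalization gradient: the probability that a
    response generalizes to a new stimulus falls off exponentially
    with distance d in psychological space."""
    return math.exp(-d / sigma)

# The gradient decreases monotonically with distance:
for d in (0.0, 0.5, 1.0, 2.0):
    print(d, round(generalization(d), 3))
```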
Object recognition
Natural object recognition (S. Edelman, 1997)
Second-order similarity in low-dimensional (<300) space is sufficient.
Population of columns as weak classifiers working in chorus - stacking.
$$P(s\,|\,\mathbf{r}) = P(s\,|\,r_1, r_2, \ldots, r_N) = \frac{P(s)\prod_{i=1}^{N} P(r_i\,|\,s)}{\sum_{s'} P(s')\prod_{i=1}^{N} P(r_i\,|\,s')}$$
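A minimal numerical sketch of this Bayesian combination of weak classifiers working in chorus; the prior and likelihood values are made-up illustrative numbers:

```python
import numpy as np

def combine(prior, likelihoods):
    """Posterior over object identity s from N conditionally
    independent cues: P(s|r_1..r_N) ~ P(s) * prod_i P(r_i|s)."""
    post = prior * np.prod(likelihoods, axis=0)
    return post / post.sum()

prior = np.array([0.5, 0.5])        # two candidate objects s
likelihoods = np.array([            # rows: P(r_i | s) for N = 3 "columns"
    [0.8, 0.4],
    [0.7, 0.5],
    [0.6, 0.5],
])
posterior = combine(prior, likelihoods)
print(posterior)   # the chorus of weak cues sharpens the decision
```

No single cue is decisive here, yet the product of likelihoods concentrates the posterior on the first object.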
Semantic memory
Autoassociative network developing internal representations (McClelland, McNaughton & O'Reilly, 1995).
After training distance relations between different
categories are displayed in a dendrogram, showing
natural similarities/ clusters.
MDS mappings: minimize $\sum_{i<j} (R_{ij} - r_{ij})^2$
from internal neural activations;
from original data in the P-space: a hypercube with dimensions for predicates, e.g. robin(x) ∈ {0, 1};
from psychological experiments, similarity matrices;
all show similar configurations.
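The stress minimization can be sketched directly as gradient descent; the toy data (three equidistant items) is an illustrative assumption:

```python
import numpy as np

def mds(R, dim=2, lr=0.02, steps=3000, seed=0):
    """Metric MDS: move points Y to minimize the stress
    sum_{i<j} (R_ij - r_ij)^2, where r_ij = |Y_i - Y_j|."""
    rng = np.random.default_rng(seed)
    n = R.shape[0]
    Y = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(steps):
        diff = Y[:, None, :] - Y[None, :, :]    # pairwise difference vectors
        r = np.sqrt((diff ** 2).sum(-1))
        np.fill_diagonal(r, 1.0)                # avoid division by zero
        coef = (r - R) / r
        np.fill_diagonal(coef, 0.0)
        Y -= lr * 2 * (coef[:, :, None] * diff).sum(axis=1)
    return Y

# Three items, all pairwise dissimilarities equal to 1:
R = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
Y = mds(R)          # recovers an equilateral triangle in 2-D
```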
Neural distances
Activations of groups of neurons presented in activation space
define similarity relations in geometrical model.
From neurodynamics to P-spaces
Modeling input/output relations with some internal parameters.
Walter Freeman: model of olfaction in rabbits, 5 types of odors, 5
types of behavior, very complex model in between.
Simplified models: H. Liljenström.
More neurodynamics
Amit group, 1997-2001,
simplified spiking neuron
models of column activity
during learning.
Stage 1: single columns
respond to some feature.
Stage 2: several columns
respond to different features.
Stage 3: correlated activity
of many columns appears.
Formation of new attractors
=>formation of mind objects.
PDF: p(activity of columns | presented features)
Category learning
Large field, many models.
Classical experiments: Shepard, Hovland and Jenkins (1961),
replicated by Nosofsky et al. (1994)
Problems of increasing complexity; results determined by logical rules.
3 binary-valued dimensions:
shape (square/triangle), color (black/white), size (large/small).
4 objects in each of the two categories presented during learning.
Type I - categorization using one dimension only.
Type II - two dim. are relevant (XOR problem).
Types III, IV, and V - intermediate complexity, between Type II and Type VI; all 3 dimensions relevant, "single dimension plus exception" type.
Type VI - most complex, 3 dimensions relevant; the only logic is to enumerate the stimuli in each of the categories.
Difficulty (number of errors made): Type I < II < III ~ IV ~ V < VI
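The structure of these problems is easy to verify by brute force; a sketch for Types I, II, and VI (the rule definitions follow the standard descriptions above):

```python
from itertools import product

stimuli = list(product([0, 1], repeat=3))   # shape, color, size

type_I  = lambda s: s[0] == 1                       # one relevant dimension
type_II = lambda s: s[0] != s[1]                    # XOR of two dimensions
type_VI = lambda s: (s[0] + s[1] + s[2]) % 2 == 1   # parity: all three relevant

def relevant_dims(rule):
    """A dimension is relevant if flipping it changes category
    membership for at least one stimulus."""
    dims = set()
    for s in stimuli:
        for d in range(3):
            t = list(s); t[d] ^= 1
            if rule(s) != rule(tuple(t)):
                dims.add(d)
    return dims

# Each rule puts 4 of the 8 stimuli in each category:
print([sum(map(r, stimuli)) for r in (type_I, type_II, type_VI)])   # [4, 4, 4]
print([len(relevant_dims(r)) for r in (type_I, type_II, type_VI)])  # [1, 2, 3]
```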
Canonical neurodynamics.
What happens in the brain during category learning?
Complex neurodynamics <=> simplest, canonical dynamics.
For all logical functions one may write corresponding equations.
For XOR (type II problems) equations are:
$$V(x,y,z) = -3xyz + \tfrac{1}{4}\left(x^2+y^2+z^2\right)^2$$

$$\dot{x} = -\frac{\partial V}{\partial x} = 3yz - \left(x^2+y^2+z^2\right)x$$
$$\dot{y} = -\frac{\partial V}{\partial y} = 3xz - \left(x^2+y^2+z^2\right)y$$
$$\dot{z} = -\frac{\partial V}{\partial z} = 3xy - \left(x^2+y^2+z^2\right)z$$
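Assuming the canonical XOR potential V(x,y,z) = -3xyz + (x^2+y^2+z^2)^2/4 (a reconstruction consistent with the gradient dynamics above), the flow can be integrated with a simple Euler scheme; the initial condition is chosen for illustration:

```python
import numpy as np

def simulate(s0, dt=0.01, steps=10000):
    """Euler integration of the gradient flow for the XOR potential
    V(x,y,z) = -3xyz + (x^2 + y^2 + z^2)^2 / 4."""
    s = np.array(s0, dtype=float)
    for _ in range(steps):
        x, y, z = s
        r2 = x*x + y*y + z*z
        s = s + dt * np.array([3*y*z - r2*x,    # -dV/dx
                               3*x*z - r2*y,    # -dV/dy
                               3*x*y - r2*z])   # -dV/dz
    return s

s = simulate([0.5, 0.4, 0.6])
print(s)   # relaxes to the point attractor at (1, 1, 1)
```

Trajectories starting in this basin relax to the fixed point (1, 1, 1), one of the attractors that encode the category structure.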
Learning
[Figure: learning seen from the neurocognitive and psychological points of view.]
Probing
[Figure: probing seen from the neurocognitive and psychological points of view.]
Automatization of actions
Learning: initially there is conscious involvement (large brain areas are active); in the end the action becomes automatic, subconscious, intuitive (well-localized activity).
Formation of new resonant states - attractors in brain dynamics - during learning => neural models.
Reinforcement learning requires observing and evaluating how successful the actions planned and executed by the brain are.
Relating current performance to memorized episodes of performance requires evaluation + comparison (Gray: subiculum), followed by emotional reactions that provide reinforcement via dopamine release, facilitating rapid learning of specialized neural modules.
Working memory is essential to perform such a complex task.
Errors are painfully conscious, and should be remembered.
Conscious experiences provide reinforcement (is this main function of
consciousness?); there is no transfer from conscious to subconscious.
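As a toy illustration of reward-driven learning (not a model from the slides), TD(0) value learning on a small chain of states, with the prediction error delta playing the role of the dopamine signal:

```python
def td0(episodes, alpha=0.1, gamma=0.9, n_states=5):
    """TD(0) on a chain: the agent always steps right and receives
    reward 1 on reaching the terminal state. The prediction error
    delta = r + gamma*V[s'] - V[s] drives all value updates."""
    V = [0.0] * (n_states + 1)          # V[n_states] is terminal, stays 0
    for _ in range(episodes):
        for s in range(n_states):
            s2 = s + 1
            r = 1.0 if s2 == n_states else 0.0
            delta = r + gamma * V[s2] - V[s]   # dopamine-like error
            V[s] += alpha * delta
    return V[:n_states]

V = td0(2000)
print(V)   # values rise toward the rewarded end of the chain
```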
$$g_p(\mathbf{X};\mathbf{P}) = \prod_{i=1}^{N} g_{p,i}\left(x_i; P_i^{\,p}\right), \qquad F(\mathbf{X};\mathbf{P}) = \sum_{p=1}^{M} W_p\, g_p\left(\mathbf{X};\mathbf{P}^p\right)$$
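A sketch of this form, a weighted sum of modules g_p(X) = prod_i g_{p,i}(x_i), using Gaussian factors; the centers, widths, and weights are illustrative assumptions:

```python
import math

def g_factor(x, mu, sigma):
    """One local factor g_{p,i}(x_i): Gaussian membership in 1-D."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def g_module(X, mus, sigmas):
    """g_p(X) = prod_i g_{p,i}(x_i): a separable local function."""
    out = 1.0
    for x, m, s in zip(X, mus, sigmas):
        out *= g_factor(x, m, s)
    return out

def F(X, modules, W):
    """F(X) = sum_p W_p * g_p(X): weighted sum over modules."""
    return sum(w * g_module(X, mus, sigmas)
               for w, (mus, sigmas) in zip(W, modules))

modules = [((0.0, 0.0), (1.0, 1.0)),    # module p=1 centered at the origin
           ((2.0, 2.0), (1.0, 1.0))]    # module p=2 centered at (2, 2)
W = [1.0, 0.5]
print(F((0.0, 0.0), modules, W))   # 1 + 0.5*exp(-4), dominated by module 1
```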
Intuitive thinking
Question in qualitative physics:
if R2 increases, while R1 and Vt are constant,
what will happen with the current I and with V1, V2?
Geometric representation of facts:
+ increasing, 0 constant, - decreasing.
Ohm's law V = IR; Kirchhoff's law V = V1 + V2.
True: (I-, V-, R0), (I+, V+, R0); false: (I+, V-, R0).
5 laws: 3 Ohm's and 2 Kirchhoff's laws.
All laws, A = B + C, A = B*C, A^-1 = B^-1 + C^-1, have an identical geometric interpretation!
13 true, 14 false facts; simple P-space,
complex neurodynamics.
Intuitive reasoning
All 5 laws are simultaneously fulfilled; all have the same geometric representation.
Question: if R2 = +, R1 = 0 and V = 0, what can be said about I, V1, V2?
Find the missing value giving F(V=0, R, I, V1, V2, R1=0, R2=+) > 0.
Suppose that variable X = +; is this possible?
No, if F(V=0, R, I, V1, V2, R1=0, R2=+) = 0, i.e. some law is not fulfilled.
If nothing is known, 111 consistent combinations out of 2187 (5%) exist.
Intuitive reasoning, with no manipulation of symbols; heuristic: select the variable that gives a unique answer.
Soft constraints or semi-quantitative reasoning => small |F_SM(X)| values.
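The 111-of-2187 count, and the unique answer to the question above, can be reproduced by brute force over the sign algebra (all five laws reduce to the same qualitative-sum constraint on changes):

```python
from itertools import product

SIGNS = (-1, 0, 1)   # decreasing, constant, increasing

def qsum(b, c):
    """Possible signs of the change in A when A = B + C; the laws
    A = B*C and 1/A = 1/B + 1/C give the same constraint on changes."""
    if b == 0: return {c}
    if c == 0: return {b}
    return {b} if b == c else {-1, 0, 1}

def consistent(V, R, I, V1, V2, R1, R2):
    return (V in qsum(V1, V2) and R in qsum(R1, R2) and
            V in qsum(I, R) and V1 in qsum(I, R1) and V2 in qsum(I, R2))

combos = [c for c in product(SIGNS, repeat=7) if consistent(*c)]
print(len(combos), 3 ** 7)       # 111 2187

# V constant, R1 constant, R2 increasing => a unique consequence:
answers = [c for c in combos if c[0] == 0 and c[5] == 0 and c[6] == 1]
print(answers)   # [(0, 1, -1, -1, 1, 0, 1)]: I and V1 decrease, V2 increases
```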
$$S(0) = X_{\mathrm{inp}}; \qquad \dot{S}(t) = \nabla_S M(S;t)\,\bigl[1 - g\bigl(M(S;t)\bigr)\bigr] + \xi(t)$$
where $g(x)$ controls the sticking and $\xi(t)$ is a noise + external forces term.
Mind state has inertia and momentum; transition probabilities between mind objects should be fitted to the transition probabilities between the corresponding attractors of the neurodynamics (a QM-like formalism).
Primary mind objects - from sensory data.
Secondary mind objects - abstract categories.
Some connections
Geometric/dynamical ideas related to mind may be found in many fields:
Neuroscience:
D. Marr (1970) probabilistic landscape.
C.H. Anderson, D.C. van Essen (1994): Superior Colliculus PDF maps
S. Edelman: neural spaces, object recognition, global representation space
approximates the Cartesian product of spaces that code object fragments,
representation of similarities is sufficient.
Psychology:
K. Lewin, psychological forces.
G. Kelly, Personal Construct Psychology.
R. Shepard, universal invariant laws.
P. Johnson-Laird, mind models.
Folk psychology: to put in mind, to have in mind, to keep in mind
(mindmap), to make up one's mind, be of one mind ... (space).
More connections
AI: problem spaces - reasoning, problem solving, SOAR, ACT-R,
little work on continuous mappings (MacLennan) instead of symbols.
Engineering: system identification, internal models inferred from input/output observations; this may be done without any parametric assumptions if a number of identical neural modules are used!
Philosophy:
P. Gärdenfors, conceptual spaces
R.F. Port, T.van Gelder, ed. Mind as motion (MIT Press 1995)
Linguistics:
G. Fauconnier, Mental Spaces (Cambridge U.P. 1994).
Mental spaces and non-classical feature spaces.
J. Elman, Language as a dynamical system; J. Feldman neural basis;
Stream of thoughts, sentence as a trajectory in P-space.
Psycholinguistics: T. Landauer, S. Dumais, Latent Semantic Analysis, Psych. Rev. (1997). Semantics for a 60k-word corpus requires about 300 dimensions.
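A toy version of the LSA idea, with a tiny hand-made term-document matrix (real LSA uses tens of thousands of words and retains ~300 dimensions):

```python
import numpy as np

# Rows: terms, columns: documents (toy counts, illustrative only).
X = np.array([
    [2., 1., 0., 0.],   # "brain"
    [1., 2., 0., 0.],   # "neuron"
    [0., 0., 3., 1.],   # "space"
    [0., 0., 1., 3.],   # "geometry"
])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                        # keep k latent semantic dimensions
terms = U[:, :k] * s[:k]     # term vectors in the latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(terms[0], terms[1]))   # "brain" vs "neuron": similar
print(cos(terms[0], terms[2]))   # "brain" vs "space": unrelated
```

Terms that co-occur in the same documents end up close in the latent space even without direct co-occurrence counts.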
Conclusions
Complex neurodynamics => dynamics in P-spaces.
Low-dimensional representation of mental events.
Is this a good bridge between mind and brain?
Psychological interpretations may be illusory!
Useful applications of the static Platonic model.
Open questions:
High-dimensional P-spaces with Finsler geometry are needed for visualization of mind events - will such a model still be understandable?
Mathematical characterization of mind space? Many choices.
Challenges: simulations of brains may lead to mind functions, but without
conceptual understanding;
neurodynamical models => P-spaces for monkey categorization.
At the end of the road: physics-like theory of events in mental spaces?