
CogSci 131

Computational models of cognition

Tom Griffiths

Computation and cognition

[diagram: computation and cognition each map
input → output]

Computational modeling
Look for principles that characterize
both computation and cognition
[diagram: parallel input → computation → output
pipelines for machines and for minds]

Two goals
Cognition:
explain human cognition (and behavior) in
terms of the underlying computation

Computation:
gain insight into how to solve some
challenging computational problems

Computational problems
Easy:
arithmetic, algebra, chess

Difficult:
learning and using language
sophisticated senses: vision, hearing
similarity and categorization
representing the structure of the world
scientific investigation

human cognition sets the standard

What this class is about


Developing formal accounts of cognition
how people think and learn, and how we can
make machines do the same thing
Using tools from:
artificial intelligence
machine learning
statistics

How is that different from CS 188?


CS 188 focuses on computer science,
while we will focus on cognition
less programming
more human data
emphasis on conceptual questions about
where formal approaches fit into cognition

If you have already taken CS 188, some
ideas will be familiar, but the context will
probably be quite different

Who should take this class


Advanced students in cognitive science,
psychology, and computer science
interested in modeling cognition
Prerequisites:
exposure to cognitive science (e.g. CogSci C1)
basic programming skills (e.g. CS 61A, Eng 7)
comfort with calculus (e.g. Math 1A) and
discrete math (e.g. Math 55 or CS 70)

General strategy
Lots of material: breadth over depth
you can see what's out there, and choose
to explore things that interest you further

Learning by doing
focus on problem sets, no in-class tests

More math in class than you will be
expected to do in your problem sets

Website
http://bcourses.berkeley.edu
bCourses site has syllabus with links to
readings, copies of all handouts, slides,
and problem sets
You should have access if you are
enrolled; if not, email me: tom_griffiths

Reading
This class is reading-intensive!
textbook: Russell & Norvig
(3rd or 2nd edition are okay)
many primary sources
all available via bCourses
interest in a reader?

Slides will be posted before class

Requirements
Seven problem sets (70%)
programming in Python and written answers
collaboration policy: discuss, don't write
first problem set released today!
later problem sets will have challenge
problems for extra credit

Take-home final (30%)


same kind of material as problem sets
no collaboration, limited time

Programming
All of the programming for the class (in the
problem sets) will be in Python
We are using IPython/Jupyter notebooks
for the problem sets, hosted on a server
You will have access to the server if you
enrolled or joined the waitlist by Monday
More information on accessing problem
sets at the end of class today and in section

Sections
Sections are optional in general, but we
recommend you go next week to make
sure you're ready for the problem sets
The goal of section is to discuss and clarify
topics from class, answer your questions,
and help you with Python programming
(the exact balance varies week to week)

The waitlist
I want everybody who needs/wants to get
into the class to get into the class
At present this looks like it should almost
be possible, so hang in there while we sort
out the details
Two new sections (107 and 108) were
added to accommodate more students

Meet the team


Five fantastic GSIs:
David Bourgin
Austin Chen
Dylan Daniels
Falk Lieder
Joshua Peterson

Questions?

Three approaches
Rules and symbols
Networks, features, and spaces
Probability and statistics

Logic
All As are Bs
All Bs are Cs
∴ All As are Cs
Aristotle
(384-322 BC)

The mathematics of reason

Thomas Hobbes
(1588-1679)

René Descartes
(1596-1650)

Gottfried Leibniz
(1646-1716)

Modern logic
P → Q
P
∴ Q

George Boole
(1816-1854)

Gottlob Frege
(1848-1925)

Computation

Alan Turing
(1912-1954)

Rules and symbols


Perhaps we can consider thought a set of
rules, applied to symbols
generating infinite possibilities with finite means
characterizing cognition as a formal system

This idea was applied to:


deductive reasoning (logic)
language (generative grammar)
problem solving and action (production systems)

Big question: what are the rules of cognition?
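The rules-and-symbols idea can be sketched as a toy production system: if-then rules fire whenever their conditions match the symbols currently held as facts. The specific rules and facts below are invented for illustration, not taken from the lecture.

```python
# Toy production system: thought as rules applied to symbols.
# The rules and facts here are invented for illustration.
rules = [
    ({"it_is_raining"}, "ground_is_wet"),  # if raining -> ground is wet
    ({"ground_is_wet"}, "wear_boots"),     # if ground is wet -> wear boots
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"it_is_raining"}, rules))
```

Finite means (two rules) generate conclusions not stated in the input, which is the sense in which a formal system can characterize inference.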

Computational problems
Easy:
arithmetic, algebra, chess

Difficult:
learning and using language
sophisticated senses: vision, hearing
similarity and categorization
representing the structure of the world
scientific investigation

human cognition sets the standard

Inductive problems
Drawing conclusions that are not fully
justified by the available data
e.g. detective work
"In solving a problem of this sort, the
grand thing is to be able to reason
backward. That is a very useful
accomplishment, and a very easy one,
but people do not practice it much."
(Sherlock Holmes, in A Study in Scarlet)

Much more challenging than deduction!

Challenges for symbolic approaches


Many human concepts have fuzzy boundaries
notions of similarity and typicality are hard to
reconcile with binary rules

Solving inductive problems requires dealing
with uncertainty and partial knowledge
Learning systems of rules and symbols is hard!
some people who think of human cognition in these
terms end up arguing against learning

Three approaches
Rules and symbols
Networks, features, and spaces
Probability and statistics

Similarity
What determines similarity?

Representations
What kind of representations are used
by the human mind?

Representations
How can we capture the meaning of words?

Semantic networks

Semantic spaces

Categorization

Computing with spaces

[diagram: a simple network maps perceptual features
x1, x2 to an output g(Wx) labeling the category]

output: g(Wx), with +1 = cat, -1 = dog
error: E = (y - g(Wx))^2, where y is the correct label
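The slide's classifier can be sketched in Python (the language used in the problem sets). The weight values, feature values, and the choice of a threshold function for g below are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Sketch of the slide's classifier: output g(Wx) thresholded
# to +1 (cat) or -1 (dog). Weights and features are made up.
def g(a):
    return np.where(a >= 0, 1, -1)  # +1 = cat, -1 = dog

W = np.array([[0.8, -0.5]])  # weight matrix (illustrative values)
x = np.array([0.9, 0.2])     # perceptual features x1, x2

output = g(W @ x)
print(output)  # [1] here, i.e. "cat" under this toy weighting
```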

General-purpose learning mechanisms

[diagram: error E as a function of weight w_ij; the slope
∂E/∂w_ij is negative, positive, or zero at different points]

Δw_ij = -η ∂E/∂w_ij
(η is the learning rate)
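The learning rule can be sketched as a small gradient-descent loop. A differentiable g (tanh) is used here so that ∂E/∂w_ij exists; the training data, targets, learning rate, and number of passes are all invented for illustration.

```python
import numpy as np

# Sketch of the slide's rule: w_ij <- w_ij - eta * dE/dw_ij,
# for E = (y - g(Wx))^2 with g = tanh. Data are made up.
eta = 0.1                                # learning rate (eta)
W = np.zeros((1, 2))                     # initial weights
data = [(np.array([1.0, 0.0]), 1.0),     # features, label (+1 = cat)
        (np.array([0.0, 1.0]), -1.0)]    # features, label (-1 = dog)

for _ in range(100):
    for x, y in data:
        out = np.tanh(W @ x)
        # dE/dW for E = (y - out)^2, using d tanh(a)/da = 1 - tanh(a)^2
        grad = -2 * (y - out) * (1 - out**2) * x
        W = W - eta * grad               # step downhill on the error surface

# After training, each example lands on the correct side of zero.
print(np.sign(np.tanh(W @ data[0][0])), np.sign(np.tanh(W @ data[1][0])))
```

Each update moves the weights in the direction that reduces E, which is why the sign of the slope ∂E/∂w_ij on the error surface determines the direction of the change.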

General-purpose learning mechanisms


Artificial neural networks can represent any
continuous function
Simple algorithms for learning from data
fuzzy boundaries
effects of typicality

A way to explain how people could learn


things that look like rules and symbols
Big question: how much of cognition can
be explained by the input data?

Challenges for neural networks


Being able to learn anything can make it
harder to learn specific things
neural networks need lots of data
this is the bias-variance tradeoff

Neural networks are complex and hard
to understand (much like human cognition)
not necessarily the clearest language for
explaining why people do what they do

Three approaches
Rules and symbols
Networks, features, and spaces
Probability and statistics

Probability

Gerolamo Cardano
(1501-1576)

Probability

Thomas Bayes
(1701-1763)

Pierre-Simon Laplace
(1749-1827)

Bayes' theorem
How rational agents should update their
beliefs in the light of data

P(h | d) = P(d | h) P(h) / Σ_{h' ∈ H} P(d | h') P(h')

h: hypothesis
d: data
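The theorem can be sketched directly in Python: multiply each hypothesis's prior by its likelihood, then normalize by the sum over all hypotheses in H. The coin-flip hypotheses and numbers below are invented for illustration.

```python
# Sketch of Bayes' theorem: posterior = likelihood * prior,
# normalized over all hypotheses. Numbers are made up.
# Data d: observing three heads in a row.
prior = {"fair_coin": 0.5, "two_headed": 0.5}        # P(h)
likelihood = {"fair_coin": 0.5 ** 3,                 # P(d | fair)
              "two_headed": 1.0}                     # P(d | two-headed)

evidence = sum(likelihood[h] * prior[h] for h in prior)   # sum over h' in H
posterior = {h: likelihood[h] * prior[h] / evidence for h in prior}
print(posterior)  # two_headed is now far more probable than fair_coin
```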

Cognition as statistical inference


Bayes' theorem tells us how to combine
prior knowledge with data
a language for describing the constraints on
human inductive inference

Probabilistic approaches also tell us how to


make decisions and interact with others
Big question: what do the constraints on
human inductive inference look like?

Challenges for probabilistic approaches


Computing probabilities is hard: how
could brains possibly do that?
How well do the rational solutions from
probability theory describe how people
think in everyday life?

Three approaches
Rules and symbols
Networks, features, and spaces
Probability and statistics

Levels of analysis
At what level should we try to model
human cognition?
computational
problem
algorithm
implementation

For Tuesday
Fill out the survey on bCourses!

Read Marr (1982) (also on bCourses)
