
AI (Artificial Intelligence)

Chapter 1 Introduction
1. What is AI?
2. What is the Turing Test?
a. Natural language processing
b. Knowledge representation
c. Automated reasoning
d. Machine learning
e. Computer vision
f. Robotics
3. Relations between physical stimuli and intelligence?
4. Thinking humanly, thinking rationally, and acting rationally.
5. Acting rationally: agents and rational agents (intelligent agents).
6. Perfect rationality and limited rationality.
7. Algorithms, incompleteness theorem, tractability, and NP-completeness.
8. What can AI do today?
a. Robotic vehicles
b. Speech recognition
c. Autonomous planning and scheduling
d. Game playing
e. Spam fighting
f. Logistics planning
g. Robotics
h. Language translation

Chapter 2 Intelligent Agents


1. Intelligent agents
a. Environment, sensors, and actuators.
b. The connection between environment, sensors, and actuators.
2. Vacuum-cleaner example.
3. Rational agents
a. How can the performance of an agent be measured, so that
rationality or irrationality can be judged?
4. Rational agent definition: For each possible percept sequence, a
rational agent should select an action that is expected to maximize
its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
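The vacuum-cleaner example mentioned above can be sketched as a simple reflex agent; the percept format and action names below are illustrative assumptions, not from any particular library:

```python
# A minimal sketch of the two-location vacuum world (squares A and B).
# The agent's rule: if the current square is dirty, suck; otherwise
# move to the other square.

def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```

Whether this simple agent is rational depends on the performance measure chosen, which is exactly the point of the definition above.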
5. Learning and autonomy
6. Task environment
a. PEAS
7. Fully observable vs partially observable task environments.
8. Single-agent vs multi-agent
a. Competitive vs cooperative
9. Deterministic vs stochastic
10. Episodic vs sequential
11. Static vs dynamic
12. Discrete vs continuous
13. Known vs unknown (try random or the opposite)
14. Reflex-based agents
15. Model-based agents
16. Goal-based agents
17. Learning agents
18. Agent = program + architecture
19. Atomic, factored, and structured agents.

Chapter 3 Solving problems by searching


1. Problem-solving agents (goal-based agents) use an atomic state
representation in most cases, but there are also agents with factored
and structured representations, which are called planning agents.

2. Informed and uninformed searching algorithms.


3. For a problem to be solvable by a search algorithm, the environment
needs to be observable, deterministic, and known; the output of a
search algorithm is then a precise sequence of actions.
4. Search solution execution
5. A problem definition should have these parameters:
a. Initial state, e.g. In (Pristina).
b. A description of possible actions that the agent can undertake,
e.g. Go (Prizren), Go (Mitrovica), etc.
c. A transition model, e.g. Result (In (Pristina), Go (Prizren)) = In
(Prizren).
d. A goal test: checks whether the current state is a goal state.
e. Path cost: a measure, usually accumulated as a running sum of
the step costs.
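The five parameters above can be sketched as a small Python class; the road map and distances below are invented for illustration:

```python
# A sketch of the problem definition, using a hypothetical road map
# (distances are illustrative, not real).

ROADS = {  # state -> {destination of Go(destination): step cost in km}
    "Pristina": {"Prizren": 80, "Mitrovica": 40},
    "Prizren": {"Pristina": 80},
    "Mitrovica": {"Pristina": 40},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial            # a. initial state, e.g. In(Pristina)
        self.goal = goal

    def actions(self, state):             # b. possible actions: Go(city)
        return list(ROADS[state])

    def result(self, state, action):      # c. transition model:
        return action                     #    Result(In(s), Go(t)) = In(t)

    def goal_test(self, state):           # d. goal test
        return state == self.goal

    def step_cost(self, state, action):   # e. path cost = sum of step costs
        return ROADS[state][action]

p = RouteProblem("Pristina", "Prizren")
print(p.actions("Pristina"))              # ['Prizren', 'Mitrovica']
print(p.result("Pristina", "Prizren"))    # 'Prizren'
```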
6. Touring problems
a. TSP (Traveling Salesman Problem)
7. Searching for solution
a. Root equals the initial state
b. Branches imply actions
c. Nodes imply states
8. Tree searches do not avoid redundant paths; graph searches do.
Algorithms that forget their history are doomed to repeat it,
therefore a so-called explored set is a good solution for this.
9. Data structures are needed to save states, parent, actions and path
cost:
a. FIFO queue (first in, first out)
b. LIFO stack (last in, first out)
c. Priority queue: pops the element with the highest priority.
10. Uninformed (blind) searches:
a. Breadth-first search: expands the shallowest unexpanded nodes
first, level by level. Requires an enormous amount of memory
and time. A FIFO queue is used for the frontier.
b. Uniform-cost search: expands the node with the lowest path
cost, hence a priority queue is used for the frontier. E.g.,
100 km to Pristina vs 50 km to Prizren: it chooses Prizren
instead of Pristina.
c. Depth-first search: expands the deepest node in the current
frontier of the tree. Uses a LIFO stack. Nodes whose subtrees
have been fully explored are removed from memory. A drawback of
DFS is that it does not terminate in infinite state spaces. One
variant of depth-first search that requires even less memory is
backtracking search.
d. Depth-limited search: A variant of DFS but with a predefined
depth limit.
e. Iterative deepening DFS: a combination of DLS and DFS; runs
with depth limit 0, 1, 2, ... until a solution is found.

f. Bidirectional search: two simultaneous searches, one starting
from the initial state and the other from the goal, hoping they
meet in the middle.
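The uninformed strategies above share one skeleton that differs mainly in the frontier data structure. A minimal sketch of breadth-first graph search with a FIFO frontier and an explored set (the toy map below is invented):

```python
from collections import deque

# Breadth-first graph search: FIFO queue for the frontier, an explored
# set to avoid redundant paths. GRAPH is an illustrative road map.

GRAPH = {
    "Pristina": ["Prizren", "Mitrovica"],
    "Prizren": ["Gjakova"],
    "Mitrovica": ["Peja"],
    "Gjakova": [],
    "Peja": [],
}

def bfs(start, goal):
    frontier = deque([[start]])        # queue of paths, shallowest first
    explored = set()
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:              # goal test
            return path
        if state in explored:          # skip already-expanded states
            continue
        explored.add(state)
        for nxt in GRAPH[state]:
            frontier.append(path + [nxt])
    return None                        # no solution

print(bfs("Pristina", "Gjakova"))  # ['Pristina', 'Prizren', 'Gjakova']
```

Swapping the deque for a LIFO stack gives depth-first search; swapping it for a priority queue ordered by path cost gives uniform-cost search.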
11. Informed searches use knowledge beyond the definition of the
problem. Best-first search algorithms use an evaluation function
f(n) to rank the states, and expand the one with the lowest value
first. A priority queue is used as in uniform-cost search, except
that UCS orders by path cost alone.
a. Greedy best-first search: expands the node that appears closest
to the goal, i.e. the one with the lowest heuristic value h(n).
Uses a priority queue.
b. A* (A-star) search: expands the node with the lowest f(n),
where f(n) = g(n) + h(n), g(n) is the cost to reach the node,
and h(n) is the estimated cost to go from the node to the goal.
c. Iterative deepening A* (IDA*).
d. RBFS: a memory-bounded best-first search.
12. Heuristic functions.
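A minimal sketch of A* search; the graph, step costs, and heuristic values below are invented for illustration (h is an assumed admissible estimate of the remaining cost to the goal G):

```python
import heapq

# A* on a small weighted graph. Frontier entries are ordered by
# f(n) = g(n) + h(n), using a priority queue (heapq).

GRAPH = {                      # state -> {successor: step cost}
    "S": {"A": 1, "B": 4},
    "A": {"B": 2, "G": 5},
    "B": {"G": 1},
    "G": {},
}
H = {"S": 3, "A": 2, "B": 1, "G": 0}   # heuristic: estimated cost to G

def astar(start, goal):
    frontier = [(H[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {}                                  # cheapest g seen per state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if g >= best_g.get(state, float("inf")):
            continue                             # stale or worse entry
        best_g[state] = g
        for nxt, cost in GRAPH[state].items():
            g2 = g + cost
            heapq.heappush(frontier, (g2 + H[nxt], g2, nxt, path + [nxt]))
    return None

print(astar("S", "G"))  # (['S', 'A', 'B', 'G'], 4)
```

Setting h(n) = 0 everywhere reduces this to uniform-cost search; ordering by h(n) alone gives greedy best-first search.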

Chapter 4 Beyond classical search

1. Local search algorithms
a. They operate on a current node.
b. They use very little memory, usually a constant amount.
c. They can find a solution in reasonable time in infinite or
continuous spaces, where the systematic search algorithms are
unsuitable.
2. State-space landscape
a. Global minimum
b. Global maximum
c. Complete algorithm
d. Optimal algorithm
3. Hill climbing
a. Steepest-ascent hill climbing: incomplete, goes only uphill.
b. Also known as greedy local search.
c. When does hill climbing get stuck?
i. Local maxima
ii. Ridges
iii. Plateaux
d. Stochastic hill climbing
e. First-choice hill climbing
f. Random-restart hill climbing: complete in most cases.
4. Simulated annealing
a. A combination of random walk (which is complete but
inefficient) and hill climbing.
b. Gradient descent: the ping-pong ball in a crevice example.
5. Local beam search
a. Keeps track of k states rather than just one. The k states are
generated randomly; at each step the successors of all k states
are generated. If one of them is the goal, the algorithm halts;
otherwise it chooses the best k states from all the successors.
In a local beam search, useful information is passed among the
parallel search threads.
6. Stochastic local beam search
a. It chooses k successors at random, with probability
proportional to their value.
7. Genetic algorithms
a. A variant of stochastic beam search.
b. Population: the k states generated at the beginning.
c. Individual: a member of the population.
d. Fitness function: returns higher values for better states.
e. Selection: performed probabilistically.
f. Crossover
g. Mutation
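Steepest-ascent hill climbing with random restarts can be sketched as follows; the 1-D objective with two maxima is invented for illustration:

```python
import random

# Steepest-ascent hill climbing on a discrete 1-D landscape with a
# local maximum at x = 2 and the global maximum at x = 8. Plain hill
# climbing can get stuck at x = 2; random restarts usually escape it.

def objective(x):
    if x > 5:
        return -(x - 8) ** 2 + 30      # peak of 30 at x = 8 (global max)
    return -(x - 2) ** 2 + 10          # peak of 10 at x = 2 (local max)

def hill_climb(start, lo=0, hi=10):
    current = start
    while True:
        neighbors = [n for n in (current - 1, current + 1) if lo <= n <= hi]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(current):
            return current             # no uphill neighbor: a (local) maximum
        current = best

def random_restart_hill_climb(restarts=20, seed=0):
    rng = random.Random(seed)
    results = [hill_climb(rng.randint(0, 10)) for _ in range(restarts)]
    return max(results, key=objective)

print(hill_climb(0))   # 2  (stuck on the local maximum)
print(hill_climb(10))  # 8  (started on the right slope)
print(random_restart_hill_climb())
```

Simulated annealing would instead sometimes accept downhill moves, with a probability that decreases over time.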

Chapter 6 Constraint satisfaction problems


CSP

Chapter 7 Logical agents
1. Knowledge-based agents.
