
Intelligent Agents

Definitions
Agent = entity that can perceive, through sensors, and act, through
effectors.
Example: the vacuum-cleaner agent
Deterministic vs. stochastic
Deterministic
The next state of the environment is completely determined by the current state and the
agent's action.
Example: our vacuum-cleaner agent.
Stochastic
Not deterministic: the next state is not fully determined by the current state and action.
Example: the real world (e.g., driving a car).
Adding uncertainty (stochasticity) to the vacuum agent:
Random appearance of dust.
Unreliable suction.
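The "random appearance of dust" idea can be sketched as a single environment tick; the function name, the dict-based dirt map, and the 10% default are illustrative assumptions, not part of the slides:

```python
import random

def stochastic_step(dirt, dust_prob=0.10, rng=None):
    """One environment tick with random dust appearance (sketch).

    `dirt` maps cell -> dirty?; each clean cell independently becomes
    dirty with probability `dust_prob`. All names are illustrative.
    """
    rng = rng or random.Random()
    return {cell: True if dirty else rng.random() < dust_prob
            for cell, dirty in dirt.items()}
```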

Simple reflex agents
Select their action according to the current percept (no memory).
Example for the vacuum world:
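A minimal sketch of such an agent for a two-square world; the square names 'A'/'B' and the action strings are conventional assumptions, not fixed by the slides:

```python
def simple_reflex_vacuum_agent(percept):
    """Choose an action from the current percept only (no memory).

    percept is a (location, status) pair, e.g. ('A', 'Dirty').
    """
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'
```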
Simple reflex agents: global architecture
Model-based reflex agents (internal state)
Goal-based agents (internal state)
Utility-based agents (internal state)
Learning agents (internal state)
Project: Implementing the vacuum-cleaner
Part 1: Building a flexible environment

Implement a performance-measuring environment simulator for the vacuum-cleaner
world depicted previously. Your implementation should be modular
so that the sensors, actuators, and environment characteristics (size, shape,
dirt placement) can change easily.
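One possible skeleton for such a modular simulator; class and method names, the grid representation, and the parameter set are assumptions for illustration, not a prescribed design:

```python
import random

class VacuumEnvironment:
    """Minimal modular simulator for the vacuum-cleaner world (sketch).

    Grid size, dirt placement probability, and the random source are
    parameters, so geography and sensors can be changed easily.
    """

    def __init__(self, width=2, height=1, dirt_prob=0.5, rng=None):
        self.rng = rng or random.Random()
        self.width, self.height = width, height
        self.dirt = {(x, y): self.rng.random() < dirt_prob
                     for x in range(width) for y in range(height)}
        self.agent_pos = (0, 0)
        self.cleaned = 0  # performance counter

    def percept(self):
        """What the agent senses: its location and that cell's status."""
        status = 'Dirty' if self.dirt[self.agent_pos] else 'Clean'
        return self.agent_pos, status

    def execute(self, action):
        """Apply an actuator command to the environment."""
        x, y = self.agent_pos
        if action == 'Suck':
            if self.dirt[(x, y)]:
                self.dirt[(x, y)] = False
                self.cleaned += 1
        elif action == 'Right':
            self.agent_pos = (min(x + 1, self.width - 1), y)
        elif action == 'Left':
            self.agent_pos = (max(x - 1, 0), y)
```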
Project: Implementing the vacuum-cleaner
Part 2: Implementing a simple reflex agent

Implement a simple reflex agent for the vacuum environment. Run the
environment with this agent for all possible initial dirt configurations and
agent locations. Record the performance score for each configuration and the
overall average score.

Note: performance = number of cleaned cells after a fixed number of
iterations (let's say number of iterations = total number of cells).
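The exhaustive run over all initial configurations can be sketched as below; a tiny one-dimensional two-square world is hard-coded here as an illustrative assumption, with performance counted as cells cleaned after `n_squares` steps, per the note above:

```python
import itertools

def run_all_configs(agent, n_squares=2):
    """Run `agent` from every initial dirt configuration and location.

    Returns a dict mapping (dirt_config, start_pos) -> score, plus the
    overall average score.
    """
    scores = {}
    for dirt in itertools.product([True, False], repeat=n_squares):
        for start in range(n_squares):
            squares = list(dirt)
            pos, cleaned = start, 0
            for _ in range(n_squares):  # iterations = number of cells
                status = 'Dirty' if squares[pos] else 'Clean'
                action = agent((pos, status))
                if action == 'Suck' and squares[pos]:
                    squares[pos] = False
                    cleaned += 1
                elif action == 'Right':
                    pos = min(pos + 1, n_squares - 1)
                elif action == 'Left':
                    pos = max(pos - 1, 0)
            scores[(dirt, start)] = cleaned
    return scores, sum(scores.values()) / len(scores)
```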
Project: Implementing the vacuum-cleaner
Part 3: Adding penalties

Consider a modified version of the previous implementation, in which the
agent is penalized one point for each movement:
Can a simple reflex agent be perfectly rational for this environment? Explain.
What about a reflex agent with state? Design such an agent.

Note: rational = optimal (always performing the action that maximizes the performance measure).
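A reflex agent with state for the penalty variant might look like the sketch below: it remembers which squares it knows to be clean and stops moving once all are known clean, avoiding further movement penalties. The square names and the 'NoOp' action are assumptions for illustration:

```python
class StatefulVacuumAgent:
    """Reflex agent with internal state for the penalized world (sketch)."""

    def __init__(self, squares=('A', 'B')):
        self.known_clean = {s: False for s in squares}

    def __call__(self, percept):
        location, status = percept
        if status == 'Dirty':
            # In a deterministic world, Suck will leave this square clean.
            self.known_clean[location] = True
            return 'Suck'
        self.known_clean[location] = True
        if all(self.known_clean.values()):
            return 'NoOp'  # everything known clean: stop paying move penalties
        return 'Right' if location == 'A' else 'Left'
```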


Project: Implementing the vacuum-cleaner
Part 4: Adding obstacles to the environment and states to the agent

Consider a modified version of the vacuum environment, in which the
geography of the environment (its extent, boundaries, and obstacles) is
unknown, as is the initial dirt configuration. The agent can go Up and Down as
well as Left and Right:
Can a simple reflex agent be perfectly rational for this environment? Explain.
Can a reflex agent with state outperform a simple reflex agent? Design such an agent and
measure its performance on several environments.
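One way to design such a stateful agent is to have it build a map as it goes. The sketch below assumes a percept of the form (status, bumped), where `bumped` signals that the last move hit a wall or obstacle; the agent tracks its position by dead reckoning, remembers visited cells and walls, and prefers unexplored directions. All names and the percept interface are assumptions:

```python
class MappingVacuumAgent:
    """State-based agent for an unknown grid (sketch)."""

    MOVES = {'Up': (0, 1), 'Down': (0, -1), 'Left': (-1, 0), 'Right': (1, 0)}

    def __init__(self):
        self.pos = (0, 0)        # dead-reckoned position
        self.visited = {self.pos}
        self.walls = set()
        self.last_move = None

    def __call__(self, percept):
        status, bumped = percept
        if bumped and self.last_move:
            # Last move failed: undo dead reckoning and record the wall.
            dx, dy = self.MOVES[self.last_move]
            self.pos = (self.pos[0] - dx, self.pos[1] - dy)
            self.walls.add((self.pos[0] + dx, self.pos[1] + dy))
        self.visited.add(self.pos)
        if status == 'Dirty':
            self.last_move = None
            return 'Suck'
        for action, (dx, dy) in self.MOVES.items():
            target = (self.pos[0] + dx, self.pos[1] + dy)
            if target not in self.visited and target not in self.walls:
                self.pos = target        # optimistic dead reckoning
                self.last_move = action
                return action
        self.last_move = None
        return 'NoOp'  # no unexplored neighbor from here
```

A fuller design would backtrack to cells with unexplored neighbors instead of stopping; this sketch only shows the map-keeping idea.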
Project: Implementing the vacuum-cleaner
Part 5: Implementing a stochastic world and behavior

The vacuum environments in the preceding parts have all been deterministic.
Develop and discuss agent programs for each of the following stochastic
versions:
Murphy's law: 25% of the time, the Suck action fails to clean the floor if it is dirty and
deposits dirt onto the floor if the floor is clean. How is your program affected if the dirt
sensor gives the wrong answer 10% of the time?
Small children: At each time step, each clean square has a 10% chance of becoming dirty.
Can you come up with a rational agent design for this case?
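The two stochastic elements of the Murphy's-law variant can be modeled as small helper functions; the names, the single-coin reading of the 25% rule, and the percentages wired in as defaults follow the statement above but are otherwise illustrative:

```python
import random

def murphy_suck(is_dirty, rng):
    """Dirt status of the square after Suck, under the 25% failure rule.

    On failure, a dirty square stays dirty and a clean square gets dirt
    deposited; on success the square ends up clean. Note the outcome is
    independent of `is_dirty` under this reading of the rule.
    """
    if rng.random() < 0.25:
        return True   # failure: square is dirty afterwards
    return False      # success: square is clean afterwards

def noisy_dirt_sensor(is_dirty, rng, error=0.10):
    """Dirt sensor that reports the wrong status `error` of the time."""
    return (not is_dirty) if rng.random() < error else is_dirty
```

An agent program for this world can then be evaluated by Monte Carlo simulation rather than the exhaustive enumeration used in Part 2.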
