
Exam for in4010tu Artificial Intelligence Techniques

November 2010

This exam will test your knowledge and understanding of the material discussed in the first period of the course Artificial Intelligence Techniques. Using the book, lecture notes, or slides during the examination is not allowed. You will have 3 hours (from 14:00 till 17:00) to complete the exam. It has 4 questions, for a total of 59 points. Please don't include irrelevant information: you will be marked down for this. Before you hand in your answers, please check that you have put your name and student number on top of every sheet you hand in.

Questions

Question 1 (9 points)

(a) (3 points) Explain what a rational agent is.


Solution:

A rational agent is something that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome (R & N). The beliefs of an agent, if they are to count as rational, should satisfy a number of conditions: they should be true (correspond to the actual state of the environment) and as complete as possible (as far as they are relevant). The goals that an agent adopts must conform with certain feasibility conditions: goals should be consistent, and the means should be available to achieve them. (Lecture notes)

(b) (3 points) Explain what the cognitive modelling approach towards Artificial Intelligence is. Provide two reasons why you think this approach is useful or not.

Solution:

The cognitive modelling approach towards AI aims to build agents that think exactly like humans. To this end, we need to know how human minds work (through introspection, psychological experiments, or brain imaging). Once we have a sufficiently precise theory of the mind, it becomes possible to express that theory as a computer program. If the program's input-output behaviour matches corresponding human behaviour, that is evidence that some of the program's mechanisms could also be operating in humans. Argument pro usefulness: cognitive modelling could help us better understand human thinking. Argument con usefulness: there may be other ways of thinking than the human way that are more effective, more efficient, more feasible to implement, etc.

(c) (3 points) Name one difference between the Turing Test approach and the Rational Agent approach towards Artificial Intelligence.

Solution:

According to the Turing Test approach, agents should act indistinguishably from humans. According to the Rational Agent approach, agents should act rationally (i.e. achieve a good outcome). So agents that perform significantly better than humans on a given task may fail the Turing Test but be perfectly rational.

Question 2 (22 points)

This question is about agent programs, Prolog, and Goal. Consider the agent program listed in Figure 1. The agent's task is to transport all cars from the north side of a river to the south side of that river by controlling a ferry. Cars can be boarded onto and disembarked from the ferry when the car is located at the same side of the river as the ferry. In your answers to the questions, do not introduce new predicates that have not been mentioned either in the program or this question.

(a) (3 points) Provide a definition for the predicate transportedCars. transportedCars should follow if and only if all cars have been transported to the south side. (Hint: use the forall(Cond1, Cond2) operator. As an example to illustrate the use of this operator, consider that the query forall(on(X,ferry), car(X)) succeeds if all things that are on the ferry are cars.)
Solution:

transportedCars :- forall(car(X), at(X,south)).
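
As a quick sanity check (not part of the required answer), the definition can be run in a plain Prolog interpreter against the initial belief base from Figure 1; the facts below are copied from that figure:

    % Initial belief base from Figure 1, loaded as plain Prolog facts.
    car(ferrari). car(toyota). car(opel).
    at(ferrari,north). at(toyota,north). at(opel,north).
    at(ferry,north).

    transportedCars :- forall(car(X), at(X,south)).

    % ?- transportedCars.
    % fails, since all three cars are still at the north side;
    % once every car satisfies at(Car,south), the query succeeds.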

(b) (3 points) Similarly, provide a definition for the predicate otherSide(X,Side) which holds if and only if X is a car or ferry and Side is the opposite side of where the car or ferry is located (i.e. south if X is at the north side, and vice versa).

main: ferryAgent {
    knowledge{
        transportedCars :- ... .
        otherSide(X,Side) :- ... .
        side(north). side(south).
    }
    beliefs{
        car(ferrari). at(ferrari,north).
        car(toyota). at(toyota,north).
        car(opel). at(opel,north).
        at(ferry,north).
    }
    goals{ transportedCars. }
    program{
        if goal(transportedCars), bel(at(X,north)) then board(X).
        if goal(transportedCars) then sail.
        if goal(transportedCars), bel(at(ferry,south)) then disembark.
    }
    actionspec{
        board(X){ pre{ at(X,Y), side(Y), car(X), at(ferry,Y) }
                  post{ not(at(X,Y)), on(X,ferry) } }
        sail{ pre{ at(ferry,X), side(X), otherSide(ferry,Y) }
              post{ ... } }
        disembark{ pre{ ... }
                   post{ at(X,Y), not(on(X,ferry)) } }
    }
}

Figure 1: Goal Agent Program

Solution:

otherSide(X,north) :- at(X,south).
otherSide(X,south) :- at(X,north).

(c) (4 points) List the actions that can be executed by the Goal agent listed in Figure 1, given the agent's belief and goal base.

Solution:

board(ferrari), board(toyota), board(opel), and sail. (disembark is not enabled because its rule requires bel(at(ferry,south)) while the ferry is at the north side, and board(ferry) fails the car(X) precondition.)

(d) (3 points) Complete the action specification for the action sail.
Solution:

sail{ pre{ at(ferry,X), side(X), otherSide(ferry,Y) }
      post{ not(at(ferry,X)), at(ferry,Y) } }


(e) (3 points) Complete the action specification for the action disembark.
Solution:

disembark{ pre{ at(ferry,Y), on(X,ferry) }
           post{ at(X,Y), not(on(X,ferry)) } }

(f) (6 points) One of the desirable properties of the agent is that it gets each car to the other (south) side. For the agent listed, since we know there is a ferrari car at the north side, we can express this by:

bel(at(ferrari,south))

Explain why the agent satisfies this property, or does not satisfy it. You do not have to provide a listing of all traces but may provide an informal argument and describe a trace informally.

Solution:

It does not satisfy the property. Although the chance is very slim, the ferry may sail back and forth without ever picking up a car: the sail rule has no belief condition and is therefore applicable in every state, so there is a trace that always selects sail and never executes board(ferrari), in which case bel(at(ferrari,south)) never holds.

Question 3 (20 points)

Consider the following planning problem that involves a household robot that prepares dinner.

Init(havePotatoes)
Goal(tableLaid ∧ haveBakedPotatoes ∧ haveBakedFish)
Action(bakePotatoes,
    Precond: havePotatoes ∧ ovenHot
    Effect: ¬havePotatoes ∧ haveBakedPotatoes)
Action(buyFish,
    Precond:
    Effect: haveFish)
Action(cookFish,
    Precond: haveFish ∧ handsClean
    Effect: ¬haveFish ∧ haveBakedFish)
Action(heatOven,
    Precond:
    Effect: ovenHot)
Action(layTable,
    Precond: handsClean
    Effect: tableLaid)
Action(washHands,
    Precond:
    Effect: handsClean)

(a) (15 points) Construct a solution for this planning problem using partial order planning.


Solution:

The partial-order plan contains the actions Start, washHands, buyFish, heatOven, layTable, cookFish, bakePotatoes, and Finish, with the following causal links:

Start --havePotatoes--> bakePotatoes
washHands --handsClean--> layTable
washHands --handsClean--> cookFish
buyFish --haveFish--> cookFish
heatOven --ovenHot--> bakePotatoes
layTable --tableLaid--> Finish
cookFish --haveBakedFish--> Finish
bakePotatoes --haveBakedPotatoes--> Finish

No further ordering constraints are needed: washHands, buyFish, and heatOven are unordered with respect to each other, as are layTable, cookFish, and bakePotatoes.
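
As an optional check (not part of the required answer), any linearisation of this plan can be verified by STRIPS-style progression in plain Prolog; the act(Name, Preconds, AddList, DelList) representation below is an encoding introduced only for this sketch:

    % Operators from the planning problem, encoded as add/delete lists.
    act(washHands,    [],                      [handsClean],        []).
    act(buyFish,      [],                      [haveFish],          []).
    act(heatOven,     [],                      [ovenHot],           []).
    act(layTable,     [handsClean],            [tableLaid],         []).
    act(cookFish,     [haveFish, handsClean],  [haveBakedFish],     [haveFish]).
    act(bakePotatoes, [havePotatoes, ovenHot], [haveBakedPotatoes], [havePotatoes]).

    % run(Plan, State0, State): apply the actions from left to right.
    run([], S, S).
    run([A|As], S0, S) :-
        act(A, Pre, Add, Del),
        subset(Pre, S0),          % preconditions must hold
        subtract(S0, Del, S1),    % remove deleted literals
        union(S1, Add, S2),       % add positive effects
        run(As, S2, S).

    % ?- run([washHands, buyFish, heatOven, layTable, cookFish, bakePotatoes],
    %        [havePotatoes], S),
    %    subset([tableLaid, haveBakedPotatoes, haveBakedFish], S).
    % succeeds, so this linearisation achieves the goal.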

(b) (5 points) Planners based on planning graphs use so-called mutex links to identify mutual exclusion between actions. A mutual exclusion relation between two actions holds if there are (i) inconsistent effects, (ii) interference, and/or (iii) competing needs between these actions. Explain each of these three conditions (i-iii), i.e. provide a description of what an inconsistent effect is, what interference is, and what competing needs are.

Solution:

Inconsistent effects: one action negates an effect of the other.
Interference: one of the effects of one action is the negation of a precondition of the other.
Competing needs: one of the preconditions of one action is mutually exclusive with a precondition of the other.
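
As an illustration (not part of the required answer), the three tests can be sketched in Prolog, assuming an action(Name, Preconds, Effects) representation where a negative literal P is written as neg(P). The competing-needs test below is a simplification: in a planning graph it checks whether the two preconditions are marked mutex at the previous literal level, not merely whether one negates the other.

    negation(neg(P), P).
    negation(P, neg(P)) :- P \= neg(_).

    % (i) Inconsistent effects: one action negates an effect of the other.
    inconsistent_effects(A, B) :-
        action(A, _, EffA), action(B, _, EffB),
        member(E, EffA), negation(E, NE), member(NE, EffB).

    % (ii) Interference: an effect of one action negates a precondition
    % of the other.
    interference(A, B) :-
        action(A, _, EffA), action(B, PreB, _),
        member(E, EffA), negation(E, NE), member(NE, PreB).

    % (iii) Competing needs (simplified): a precondition of one action
    % is the negation of a precondition of the other.
    competing_needs(A, B) :-
        action(A, PreA, _), action(B, PreB, _),
        member(P, PreA), member(Q, PreB), negation(P, Q).

    % Example encoding of one operator from Question 3:
    action(bakePotatoes, [havePotatoes, ovenHot],
           [neg(havePotatoes), haveBakedPotatoes]).

    % ?- interference(bakePotatoes, bakePotatoes).
    % succeeds: bakePotatoes deletes havePotatoes, which it also requires.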

Question 4 (8 points)

(a) (3 points) Explain the difference between active and passive sensors. Provide an example of a passive and of an active sensor.

Solution:

Passive sensors capture signals that are generated by other sources in the environment but do not themselves emit energy; a camera is an example of a passive sensor. Active sensors send energy into the environment and rely on the fact that this energy is reflected back to the sensor; a sonar is an example of an active sensor. See page 973 in Russell and Norvig, 3rd edition.

(b) (5 points) Explain why uncertainty, e.g. the fact that a robot does not know exactly where it is, which is a characteristic problem associated with robotics, complicates the robot motion planning problem. Provide one simple solution.

Solution:

Because the robot does not know its exact state, it cannot simply compute a single path from its current configuration to the goal: the plan must work for every state the robot might actually be in, or be revised as new sensor information arrives. A simple solution is to start planning from the most likely state. See Section 25.5 in the 3rd edition of Russell and Norvig.

End of exam
