
Heuristic Search

A heuristic is a method that

• might not always find the best solution,

• but is guaranteed to find a good solution in reasonable time.

• By sacrificing completeness it increases efficiency.

• It is useful in solving tough problems which
o could not be solved in any other way, or
o whose exact solutions would take an infinite or very long time to compute.

The classic example of a heuristic search method is the traveling salesman
problem.
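As an illustration, here is a minimal sketch of one such heuristic for the traveling salesman problem: the nearest-neighbour rule. It builds a tour quickly but may miss the optimal route. The city coordinates and function names below are illustrative, not part of the notes.

```python
import math

def nearest_neighbour_tour(cities, start=0):
    """Greedy TSP heuristic: always visit the closest unvisited city.
    Fast (O(n^2)) but not guaranteed to return the shortest tour."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(cities[last], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Illustrative coordinates only
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 3)]
print(nearest_neighbour_tour(cities))   # [0, 4, 1, 2, 3]
```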

And-Or Graphs

Useful for certain problems where

• The solution involves decomposing the problem into smaller problems.


• We then solve these smaller problems.

Here the alternatives often involve branches where some or all must be satisfied
before we can progress.

For example, if I want to learn to play a Frank Zappa guitar solo I could
(Fig. 2.2.1):

• Transcribe it from the CD, OR

• Buy the ``Frank Zappa Guitar Book'' AND read it from there.

Note the use of arcs to indicate that one or more nodes must all be satisfied
before the parent node is achieved. To find solutions using an AND-OR graph,
the best-first algorithm is used as a basis, with a modification to handle the set
of nodes linked by the AND factor.
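As an illustration of how AND and OR branches combine, here is a minimal sketch of solution-cost evaluation in an AND-OR graph (not the full best-first algorithm): an OR node takes its cheapest alternative, while an AND node must pay for all of its linked sub-problems. The example graph, costs and function names are illustrative only.

```python
def solution_cost(node, graph, leaf_costs):
    """Cost of solving `node`: OR nodes take the cheapest child,
    AND nodes must solve every child."""
    if node not in graph:                      # leaf / primitive problem
        return leaf_costs[node]
    kind, children = graph[node]
    child_costs = [solution_cost(c, graph, leaf_costs) for c in children]
    return min(child_costs) if kind == "OR" else sum(child_costs)

# Learn a Zappa solo: transcribe it, OR (buy the book AND read it)
graph = {
    "learn_solo": ("OR", ["transcribe", "use_book"]),
    "use_book":   ("AND", ["buy_book", "read_book"]),
}
leaf_costs = {"transcribe": 10, "buy_book": 3, "read_book": 4}
print(solution_cost("learn_solo", graph, leaf_costs))   # 7: the AND branch is cheaper
```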

Constraint Satisfaction

• The general problem is to find a solution that satisfies a set of
constraints.
• Heuristics are used not to estimate the distance to the goal but to decide
which node to expand next.
• Examples of this technique are design problems, labeling graphs, robot
path planning and cryptarithmetic puzzles (see last year).

Algorithm:
1. Propagate available constraints:
o Open all objects that must be assigned values in a complete
solution.
o Repeat until an inconsistency is found or all objects are assigned valid values:
- Select an object and strengthen as much as possible the set
of constraints that apply to the object.
- If this set of constraints is different from the previous set then open
all objects that share any of these constraints.
- Remove the selected object.
2. If the union of the constraints discovered above defines a solution, return the
solution.
3. If the union of the constraints discovered above defines a contradiction, return
failure.
4. Otherwise make a guess in order to proceed (as sketched below). Repeat until a solution is found or all
possible solutions have been exhausted:
o Select an object with an unassigned value and try to strengthen its
constraints.
o Recursively invoke constraint satisfaction with the current set of
constraints plus the selected strengthening constraint.
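A minimal sketch of this propagate-then-guess cycle, shown on a small graph-labelling (map-colouring) problem. The problem instance and function names are illustrative, and the propagation step is deliberately simplified.

```python
# Adjacent regions A-B and B-C must get different colours.
def propagate(domains, constraints):
    """Strengthen domains: drop a value when the neighbouring domain forces
    a clash. Returns False if some domain becomes empty (inconsistency)."""
    changed = True
    while changed:
        changed = False
        for x, y in constraints:                      # constraint: x != y
            for value in list(domains[x]):
                if domains[y] == {value}:             # no consistent support
                    domains[x].remove(value)
                    changed = True
            if not domains[x]:
                return False
    return True

def solve(domains, constraints):
    if not propagate(domains, constraints):
        return None                                   # contradiction
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}   # solution
    # Guess: pick an object with an unassigned value and try each candidate.
    var = next(v for v, d in domains.items() if len(d) > 1)
    for value in list(domains[var]):
        guess = {v: set(d) for v, d in domains.items()}
        guess[var] = {value}
        result = solve(guess, constraints)
        if result:
            return result
    return None

domains = {v: {"red", "green", "blue"} for v in "ABC"}
constraints = [("A", "B"), ("B", "A"), ("B", "C"), ("C", "B")]
print(solve(domains, constraints))   # e.g. {'A': 'red', 'B': 'green', 'C': 'red'}
```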

Extending Semantic Nets

Here we will consider some extensions to Semantic nets that overcome a few
problems (see Exercises) or extend their expression of knowledge.

Partitioned Networks

Partitioned Semantic Networks allow for:

• Propositions to be made without commitment to truth.


• Expressions to be quantified.

Basic idea: Break network into spaces which consist of groups of nodes and
arcs and regard each space as a node.

Consider the following: Andrew believes that the earth is flat. We can encode
the proposition the earth is flat in a space and within it have nodes and arcs that
represent the fact (Fig. 15). We can then have nodes and arcs to link
this space to the rest of the network to represent Andrew's belief.

Fig. 12 Partitioned network

Now consider the quantified expression: Every parent loves their child. To
represent this we:

• Create a general statement, GS, as a special class.

• Make node g an instance of GS.
• Every element will have at least 2 attributes:
o A form that states which relation is being asserted.
o one or more ∀ (for all) or ∃ (exists) connections -- these represent
the quantified variables in such
statements, e.g. x, y in ∀x: parent(x) → ∃y: child(y) ∧ loves(x, y)

Here we have to construct two spaces, one for each of x and y. NOTE: We can
express the variables as existentially quantified variables and express the event
of love as having an agent p and a receiver b for every parent p, which could
simplify the network (See Exercises).

Also, if we change the sentence to Every parent loves the child then the node of the
object being acted on (the child) lies outside the form of the general statement.
Thus it is not viewed as an existentially quantified variable whose value may
depend on the agent. (See Exercises and the Rich and Knight book for examples of
this.) So we could construct a partitioned network as in Fig. 16.
Frames

Frames can also be regarded as an extension to semantic nets. Indeed it is not
clear where the distinction between a semantic net and a frame ends. Semantic
nets were initially used to represent labeled connections between objects. As
tasks became more complex, the representation needed to be more structured.
The more structured the system, the more beneficial it becomes to use frames.
A frame is a collection of attributes or slots and associated values that describe
some real world entity. Frames on their own are not particularly helpful, but
frame systems are a powerful way of encoding information to support
reasoning. Set theory provides a good basis for understanding frame systems.
Each frame represents:

• a class (set), or
• an instance (an element of a class).
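A minimal sketch, with illustrative names, of a frame system in which an instance inherits slot values from its class frame unless it overrides them locally.

```python
class Frame:
    """A frame is a named collection of slots; the parent link points to the
    class frame (for instances) or superclass frame (for classes)."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:            # climb the isa/instance link
            return self.parent.get(slot)
        raise KeyError(slot)

mammal = Frame("Mammal", legs=4, blood="warm")
elephant = Frame("Elephant", parent=mammal, colour="grey")
clyde = Frame("Clyde", parent=elephant, name_tag="Clyde")

print(clyde.get("colour"))   # 'grey' -- inherited from Elephant
print(clyde.get("legs"))     # 4      -- inherited from Mammal
```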

Scripts

A script is a structure that prescribes a set of circumstances which could be
expected to follow on from one another.

It is similar to a thought sequence or a chain of situations which could be
anticipated.

It could be considered to consist of a number of slots or frames but with more
specialized roles.

Scripts are beneficial because:

• Events tend to occur in known runs or patterns.


• Causal relationships between events exist.
• Entry conditions exist which allow an event to take place.
• Prerequisites exist for events taking place, e.g. when a student
progresses through a degree scheme or when a purchaser buys a house.

The components of a script include:

Entry Conditions
-- These must be satisfied before events in the script can occur.
Results
-- Conditions that will be true after events in script occur.
Props
-- Slots representing objects involved in events.
Roles
-- Persons involved in the events.
Track
-- Variations on the script. Different tracks may share components of the
same script.
Scenes
-- The sequence of events that occur. Events are represented
in conceptual dependency form.

Scripts are useful in describing certain situations such as robbing a bank. This
might involve:

• Getting a gun.
• Holding up the bank.
• Escaping with the money.

Here the Props might be:

• Gun, G.
• Loot, L.
• Bag, B.
• Get away car, C.
The Roles might be:

• Robber, S.
• Cashier, M.
• Bank Manager, O.
• Policeman, P.

The Entry Conditions might be:

• S is poor.
• S is destitute.

The Results might be:

• S has more money.


• O is angry.
• M is in a state of shock.
• P is shot.

There are 3 scenes: obtaining the gun, robbing the bank and the getaway.

The full Script could be described in Fig 19.


Fig. 12 Simplified Bank Robbing Script
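The same script can be sketched as a simple data structure whose fields restate the components listed above; the encoding and field names are illustrative only.

```python
# The bank-robbery script as a dictionary of its components.
robbery_script = {
    "track": "successful robbery",
    "props": {"G": "gun", "L": "loot", "B": "bag", "C": "getaway car"},
    "roles": {"S": "robber", "M": "cashier", "O": "bank manager", "P": "policeman"},
    "entry_conditions": ["S is poor", "S is destitute"],
    "scenes": ["obtain the gun", "rob the bank", "getaway"],
    "results": ["S has more money", "O is angry",
                "M is in a state of shock", "P is shot"],
}

def applicable(script, facts):
    """A script can only be activated when its entry conditions hold."""
    return all(cond in facts for cond in script["entry_conditions"])

print(applicable(robbery_script, {"S is poor", "S is destitute"}))   # True
```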

Some additional points to note on Scripts:

• If a particular script is to be applied it must be activated, and the
activation depends on its significance.
• If a topic is mentioned in passing then a pointer to that script could be
held.
• If the topic is important then the script should be opened.
• The danger lies in having too many active scripts, much as one might
have too many windows open on the screen or too many recursive calls
in a program.
• Provided events follow a known trail we can use scripts to represent the
actions involved and use them to answer detailed questions.
• Different trails may be allowed for different outcomes of scripts (e.g. the
bank robbery goes wrong).

Advantages of Scripts:

• Ability to predict events.

• A single coherent interpretation may be built up from a collection of
observations.

Disadvantages:

• Less general than frames.


• May not be suitable to represent all kinds of knowledge.

Non-Monotonic Reasoning

Predicate logic and the inferences we perform on it are an example
of monotonic reasoning.

In monotonic reasoning, if we enlarge the set of axioms we cannot retract any
existing assertions or axioms.

Humans do not adhere to this monotonic structure when reasoning:

• we need to jump to conclusions in order to plan and, more basically,


survive.
o we cannot anticipate all possible outcomes of our plan.
o we must make assumptions about things we do not specifically
know about.

Default reasoning

This is a very common form of non-monotonic reasoning. Here we want to
draw conclusions based on what is most likely to be true.

We have already seen examples of this and possible ways to represent this
knowledge.
We will discuss two approaches to do this:

• Non-Monotonic logic.
• Default logic.

DO NOT get confused by the labels Non-Monotonic and Default being
applied both to reasoning in general and to a particular logic. Non-Monotonic
reasoning is a generic description of a class of reasoning. Non-Monotonic logic is a specific
theory. The same goes for Default reasoning and Default logic.

Non-Monotonic Logic

This is basically an extension of first-order predicate logic to include
a modal operator, M. The purpose of this is to allow for consistency.

For example: ∀x: plays_instrument(x) ∧ M improvises(x) → jazz_musician(x)

states that, for all x, if x plays an instrument and if the fact that x can improvise
is consistent with all other knowledge, then we can conclude that x is a jazz
musician.

How do we define consistency?

One common solution (consistent with PROLOG notation) is that to show that
fact P is consistent we attempt to prove ¬P. If we fail we may say that P is consistent
(since ¬P is false).
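A minimal sketch of this PROLOG-style test (negation as failure): P is treated as consistent if we fail to prove ¬P from the current knowledge base. The knowledge base and helper names below are illustrative stand-ins for a real prover.

```python
def provable(fact, knowledge_base):
    """Stand-in for a real prover: a fact is provable if it is stored."""
    return fact in knowledge_base

def consistent(fact, knowledge_base):
    """P is consistent if the attempt to prove not-P fails."""
    negated = ("not", fact)
    return not provable(negated, knowledge_base)

kb = {"plays_instrument(frank)", ("not", "improvises(joe)")}
print(consistent("improvises(frank)", kb))   # True  -- cannot prove the negation
print(consistent("improvises(joe)", kb))     # False -- the negation is provable
```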

However consider the famous set of assertions relating to President Nixon.

∀x: Republican(x) ∧ M ¬Pacifist(x) → ¬Pacifist(x)

∀x: Quaker(x) ∧ M Pacifist(x) → Pacifist(x)

Now this states that Quakers tend to be pacifists and Republicans tend not to
be.

BUT Nixon was both a Quaker and a Republican so we could assert:

Quaker(Nixon)

Republican(Nixon)
This now leads to our total knowledge becoming inconsistent.

Default Logic

Default logic introduces a new inference rule of the form

A : B / C

which states that if A is deducible and it is consistent to assume B, then conclude C.

Now this is similar to Non-monotonic logic but there are some distinctions:

• New inference rules are used for computing the set of plausible
extensions (see the sketch below). So in the Nixon example above, Default logic can support
both assertions since it does not say anything about how to choose between
them -- it will depend on the inference being made.
• In Default logic any non-monotonic expressions are rules of inference
rather than expressions.
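A minimal sketch of applying default rules of the form A : B / C: a conclusion is added only when its prerequisite holds and its justification is consistent with what is already believed. With the two Nixon defaults this yields two alternative extensions, depending on which rule is applied first. The string encoding and function names are illustrative.

```python
def apply_defaults(facts, defaults):
    """Apply each default (prereq, justification, conclusion) in order,
    adding the conclusion only if the justification is still consistent."""
    beliefs = set(facts)
    for prereq, justification, conclusion in defaults:
        negation = (justification[4:] if justification.startswith("not ")
                    else "not " + justification)
        if prereq in beliefs and negation not in beliefs:
            beliefs.add(conclusion)
    return beliefs

facts = {"Quaker(Nixon)", "Republican(Nixon)"}
quaker_first = [("Quaker(Nixon)", "Pacifist(Nixon)", "Pacifist(Nixon)"),
                ("Republican(Nixon)", "not Pacifist(Nixon)", "not Pacifist(Nixon)")]

print(apply_defaults(facts, quaker_first))                   # extension with Pacifist(Nixon)
print(apply_defaults(facts, list(reversed(quaker_first))))   # extension with not Pacifist(Nixon)
```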

Implementations: Truth Maintenance Systems

Due to lecture time limitations, this topic is not dealt with in any great depth.
Please refer to the further reading section.

A variety of Truth Maintenance Systems (TMS) have been developed as a
means of implementing Non-Monotonic Reasoning Systems.

Basically TMSs:

• all do some form of dependency-directed backtracking, and

• connect assertions via a network of dependencies.

Justification-Based Truth Maintenance Systems (JTMS)

• This is a simple TMS in that it does not know anything about the
structure of the assertions themselves.
• Each supported belief (assertion) has a justification.
• Each justification has two parts:
o An IN-List -- which supports beliefs held.
o An OUT-List -- which supports beliefs not held.

• An assertion is connected to its justification by an arrow.


• One assertion can feed another justification thus creating the network.
• Assertions may be labeled with a belief status.
• An assertion is valid if every assertion in the IN-List is believed and
none in the OUT-List are believed.
• An assertion is non-monotonic if the OUT-List is not empty or if any
assertion in the IN-List is non-monotonic.

Fig. 20 A JTMS Assertion
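A minimal sketch, with an illustrative structure, of the labelling rule above: an assertion is believed (valid) when every node on its justification's IN-List is believed and no node on its OUT-List is believed.

```python
def believed(node, justifications, assumptions):
    """A node is believed if it is an assumption, or if its justification's
    IN-List is all believed and its OUT-List is all disbelieved."""
    if node in assumptions:                     # premises taken as given
        return True
    just = justifications.get(node)
    if just is None:
        return False
    in_list, out_list = just
    return (all(believed(n, justifications, assumptions) for n in in_list)
            and not any(believed(n, justifications, assumptions) for n in out_list))

# "It is winter" supports "the lake is frozen" unless we also believe "it is warm".
justifications = {"lake_frozen": (["winter"], ["warm"])}
print(believed("lake_frozen", justifications, assumptions={"winter"}))           # True
print(believed("lake_frozen", justifications, assumptions={"winter", "warm"}))   # False
```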

Logic-Based Truth Maintenance Systems (LTMS)

Similar to JTMS except:

• Nodes (assertions) assume no relationships among them except ones
explicitly stated in justifications.
• A JTMS can represent P and ¬P simultaneously. An LTMS would raise a
contradiction here.
• If this happens the network has to be reconstructed.

Assumption-Based Truth Maintenance Systems (ATMS)

• JTMS and LTMS pursue a single line of reasoning at a time and
backtrack (dependency-directed) when needed -- depth-first search.
• ATMS maintain alternative paths in parallel -- breadth-first search.
• Backtracking is avoided at the expense of maintaining multiple contexts.
• However, as reasoning proceeds contradictions arise and the ATMS can
be pruned:
o Simply find any assertion with no valid justification.
Bayes Theorem

• This states:

P(H_i | E) = P(E | H_i) P(H_i) / Σ_k P(E | H_k) P(H_k)

o This reads that, given some evidence E, the probability that
hypothesis H_i is true is equal to the ratio of the probability
that E will be true given H_i times the a priori probability of H_i,
to the sum, over the set of all hypotheses, of the probability of E
given each hypothesis times the probability of that hypothesis.
o The set of all hypotheses must be mutually exclusive and
exhaustive.
o Thus, for example, if we examine medical evidence to diagnose an
illness, we must know all the prior probabilities of finding each symptom
and also the probability of having an illness based on certain
symptoms being observed (a small worked example follows).
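A small worked example of the formula, with made-up numbers: two mutually exclusive and exhaustive hypotheses (measles / no measles) and the single piece of evidence "spots". The probabilities are illustrative only.

```python
def bayes(prior, likelihood, evidence):
    """P(H_i | E) for every hypothesis H_i, given priors P(H_i) and
    likelihoods P(E | H_i)."""
    normaliser = sum(likelihood[h][evidence] * prior[h] for h in prior)
    return {h: likelihood[h][evidence] * prior[h] / normaliser for h in prior}

prior = {"measles": 0.01, "no_measles": 0.99}
likelihood = {"measles": {"spots": 0.9}, "no_measles": {"spots": 0.05}}

print(bayes(prior, likelihood, "spots"))
# {'measles': 0.1538..., 'no_measles': 0.8461...}
```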

Bayesian statistics lie at the heart of most statistical reasoning systems.

How is Bayes’ theorem exploited?

• The key is to formulate the problem correctly:

P(A|B) states the probability of A given only B's evidence. If there is
other relevant evidence then it must also be considered.

Herein lies a problem:

• All events must be mutually exclusive. However in real world problems
events are not generally unrelated. For example in diagnosing measles,
the symptoms of spots and a fever are related. This means that
computing the conditional probabilities gets complex.

In general, given prior evidence p and some new observation N,
computing P(N | p) grows exponentially for large sets of p.

• All events must be exhaustive. This means that in order to compute all
probabilities the set of possible events must be closed. Thus if new
information arises the set must be created afresh and all probabilities
recalculated.

Thus simple Bayes’ rule-based systems are not suitable for uncertain reasoning:

• Knowledge acquisition is very hard.


• Too many probabilities needed -- too large a storage space.
• Computation time is too large.
• Updating new information is difficult and time consuming.
• Exceptions like ``none of the above'' cannot be represented.
• Humans are not very good probability estimators.

However, Bayesian statistics still provide the core of reasoning in many
uncertain reasoning systems, with suitable enhancements to overcome the above
problems.

We will look at three broad categories:

• Certainty factors,
• Dempster-Shafer models,
• Bayesian networks.

Problem solving and reasoning

The following iterative steps occur in typical blackboard problem solving activities:

1. A knowledge source makes a change to a blackboard object.

2. Each knowledge source indicates the contribution it can make to the new
solution state -- either dynamically or determined a priori.
3. A scheduler decides on a focus of attention.

4. A control module prepares the focus of attention for execution:
1. If the focus of attention is a knowledge source -- a blackboard
object is chosen to serve as its context (knowledge centered
scheduling).
2. If the focus of attention is a blackboard object -- a knowledge
source to process that object is chosen and instantiated with the
object as its context (event centered scheduling).
3. If both a blackboard object and a knowledge source are the focus
of attention, an instance of the knowledge source is made ready for
execution with the object as its context.

5. The problem solving process continues (see the sketch below) until a knowledge source has
indicated that it should stop:
1. either an acceptable solution has been found, or
2. lack of knowledge or data prohibits further action.
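A minimal sketch (all names and the toy knowledge sources are illustrative) of this control cycle: knowledge sources rate the contribution they can make, a scheduler picks the best bid as the focus of attention, and the chosen source updates the blackboard until a solution is marked or no source can act.

```python
def blackboard_loop(blackboard, knowledge_sources, max_cycles=10):
    for _ in range(max_cycles):
        # Steps 1-2: each source states whether and how well it can contribute.
        bids = [(rate(blackboard), act) for rate, act in knowledge_sources
                if rate(blackboard) > 0]
        if not bids or blackboard.get("solved"):
            return blackboard            # solution found, or no source can act
        # Steps 3-5: the scheduler picks a focus of attention and executes it.
        _, chosen_action = max(bids, key=lambda bid: bid[0])
        chosen_action(blackboard)
    return blackboard

# Two toy knowledge sources, each a (rating, action) pair.
propose = (lambda bb: 1 if "hypothesis" not in bb else 0,
           lambda bb: bb.update(hypothesis="draft"))
confirm = (lambda bb: 2 if "hypothesis" in bb else 0,
           lambda bb: bb.update(solved=True))

print(blackboard_loop({}, [propose, confirm]))   # {'hypothesis': 'draft', 'solved': True}
```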

Blocks World Planning Examples

What is the Blocks World? -- The world consists of:

• A flat surface such as a tabletop.

• An adequate set of identical blocks which are identified by letters.
• The blocks can be stacked one on top of another to form towers of apparently
unlimited height.
• The stacking is achieved using a robot arm which has fundamental
operations and states which can be assessed using logic and combined
using logical operations.
• The robot can hold one block at a time and only one block can be moved
at a time.

We shall use the four actions:

UNSTACK (A, B)
-- pick up clear block A from block B;
STACK (A, B)
-- place block A using the arm onto clear block B;
PICKUP (A)
-- lift clear block A with the empty arm;
PUTDOWN (A)
-- place the held block A onto a free space on the table.

And the five predicates:

ON (A, B)
-- Block A is on block B.
ONTABLE (A)
-- Block A is on the table.
CLEAR (A)
-- Block A has nothing on it.
HOLDING (A)
-- The arm holds block A.
ARMEMPTY
-- The arm holds nothing.

Using logic, but not logical notation, we can say that if the arm is holding a
block it is not empty; if block A is on the table it is not on any other block; and if
block A is on block B, block B is not clear.
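A minimal sketch of two of these operators as STRIPS-style rules over the five predicates, with preconditions, a delete list and an add list. The state is just a set of ground predicates, and the encoding is illustrative rather than a full planner.

```python
def unstack(state, a, b):
    """Pick up clear block a from block b with an empty arm."""
    pre = {("ON", a, b), ("CLEAR", a), ("ARMEMPTY",)}
    if pre <= state:
        return (state - {("ON", a, b), ("ARMEMPTY",)}) | {("HOLDING", a), ("CLEAR", b)}
    return None                       # preconditions not satisfied

def putdown(state, a):
    """Place the held block a onto a free space on the table."""
    if ("HOLDING", a) in state:
        return (state - {("HOLDING", a)}) | {("ONTABLE", a), ("CLEAR", a), ("ARMEMPTY",)}
    return None

# A is on B, B is on the table, the arm is empty.
state = {("ON", "A", "B"), ("ONTABLE", "B"), ("CLEAR", "A"), ("ARMEMPTY",)}
state = unstack(state, "A", "B")
state = putdown(state, "A")
print(sorted(state))   # A and B both on the table, both clear, arm empty
```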

Why Use the Blocks world as an example?

The blocks world is chosen because:

• it is sufficiently simple and well behaved,

• it is easily understood,
• yet it still provides a good sample environment to study planning:
o problems can be broken into nearly distinct subproblems, and
o we can show how partial solutions need to be combined to form a
realistic complete solution.
