
The Underlying Assumption

One of the assumptions underlying work in Artificial Intelligence is that intelligent behavior can be achieved through the manipulation of symbol structures (representing bits of knowledge). These symbols can be represented on any medium - in principle, we could develop a (very slow) intelligent machine made out of empty beer cans (plus something to move the beer cans around). However, computers provide the representational and reasoning powers whereby we might realistically expect to make progress towards automating intelligent behavior.

So, the main question now is how we can represent knowledge as symbol structures and use that knowledge to intelligently solve problems. The next few lectures will concentrate on how we represent knowledge, using particular knowledge representation languages. These are high-level representation formalisms, and can in principle be implemented using a whole range of programming languages. The remaining lectures will concentrate more on how we solve problems, using general knowledge of problem solving and domain knowledge.

In AI, the crucial thing about knowledge representation languages is that they should support inference. We can't represent explicitly everything that the system might ever need to know - some things should be left implicit, to be deduced by the system as and when needed in problem solving. For example, if we were representing facts about a particular CS3 Honours student (say Fred Bloggs) we don't want to have to explicitly record the fact that Fred's studying AI. All CS3 Honours students are, so we should be able to deduce it. Similarly, you probably wouldn't explicitly represent the fact that I'm not the president of the United States, or that I have an office in Lilybank Gardens. You can deduce these things from your general knowledge about the world.

Representing everything explicitly would be extremely wasteful of memory. For our CS3 example, we'd have 100 statements representing the fact that each student studies AI. Most of these facts would never be used. However, if we DO need to know if Fred Bloggs studies AI we want to be able to get at that information efficiently. We also would like to be able to make more complex inferences - maybe that Fred should be attending a lecture at 12am on Tuesday Feb 9th, so won't be able to have a supervision then. However, there is a tradeoff between inferential power (what we can infer) and inferential efficiency (how quickly we can infer it), so we may choose to have a language where simple inferences can be made quickly, though complex ones are not possible.
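
To make the idea concrete, here is a minimal sketch in Python (chosen only for illustration; these notes don't commit to any particular programming language) of a tiny knowledge base that records Fred's CS3 Honours membership explicitly and deduces that he studies AI from a single general rule, rather than storing that fact for every student. The predicate names (cs3_honours, studies), the one-variable rule format and the single-step matcher are all assumptions invented for this example, not features of any particular knowledge representation language.

```python
# A hedged sketch, not a real KR language: the predicates and the rule
# format below are made up purely to illustrate deducing implicit facts
# instead of storing them explicitly.

# Explicitly stored facts, as tuples of (predicate, arguments...).
facts = {
    ("cs3_honours", "fred_bloggs"),
}

# One general rule stands in for 100 separate "studies AI" statements:
# if ("cs3_honours", X) holds, then ("studies", X, "ai") holds.
rules = [
    {"if": ("cs3_honours", "X"), "then": ("studies", "X", "ai")},
]

def matches(pattern, item, binding):
    """Match a pattern against a tuple, treating 'X' as a variable to bind."""
    for p, i in zip(pattern, item):
        if p == "X":
            if binding.setdefault("X", i) != i:
                return False
        elif p != i:
            return False
    return True

def holds(query):
    """True if the query is an explicit fact or follows from one by a single rule step."""
    if query in facts:
        return True
    for rule in rules:
        binding = {}
        if len(rule["then"]) == len(query) and matches(rule["then"], query, binding):
            premise = tuple(binding.get(p, p) for p in rule["if"])
            if premise in facts:          # one-step deduction; deeper chaining omitted
                return True
    return False

print(holds(("studies", "fred_bloggs", "ai")))       # True - deduced, never stored
print(holds(("studies", "fred_bloggs", "physics")))  # False - nothing supports it
print(holds(("cs3_honours", "fred_bloggs")))         # True - stored explicitly
```

The point is exactly the tradeoff described above: the rule keeps memory usage down, but every query now pays a small inference cost, and a matcher this simple can only make very limited deductions.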

In general, a good knowledge representation language should have at least the following features:

- It should allow you to express the knowledge you wish to represent in the language. For example, suppose you want to represent the fact that "Richard knows how old he is". This turns out to be difficult to express in some languages.
- It should allow new knowledge to be inferred from a basic set of facts, as discussed above.
- It should be clear, and have a well-defined syntax and semantics. We want to know what the allowable expressions are in the language, and what they mean. Otherwise we won't be sure if our inferences are correct, or what the results mean. For example, if we have a fact grey(elephant) we want to know whether it means all elephants are grey, some particular one is grey, or what.

Some of these features
