
TERM PAPER

OF
Artificial Intelligence

Real-Time Expert Systems

Submitted By:

Avneet Singh

RC1804A09

ACKNOWLEDGMENT
Subject: Giving thanks and acknowledgment to all those who helped me with this project

I want to express deep thanks first to Almighty God for giving me the strength, patience, and everything else that helped me in the successful completion of this term paper.

Secondly, I want to thank my teacher, Ms. Taranpreet, for her help with my project.

Lastly, I thank my friends and parents for their everlasting help.

Regards

Avneet Singh

RC1804A09
INDEX

• About Real-Time Expert Systems

• What Are Expert Systems

• Applications of Expert Systems

• Advantages of Expert Systems

• Disadvantages of Expert Systems

• Advances in Real-Time Expert Systems

• BIBLIOGRAPHY
About Real-Time Expert Systems

Many "real-time" expert systems are 'soft' real-time systems, in that they claim to be fast. A 'hard' real-time system would have features that guarantee a response within a fixed amount of real time (e.g., bounded computation, not just a fast match-recognize-act cycle). Systems like G2 use event-driven processing (restricting certain rules to execute only when specific working-memory (WM) elements change in a particular way) as a method of limiting forward chaining.
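As a rough illustration of this event-driven idea, the following Python sketch registers rules against the working-memory elements they watch, so a change re-evaluates only the rules subscribed to it. The class and rule names are hypothetical and are not G2's actual interface.

```python
# Minimal sketch of event-driven rule activation (not G2's actual API):
# each rule registers the working-memory keys it watches, and only the
# rules watching a changed key are re-evaluated, instead of rescanning
# every rule on every cycle.

class WorkingMemory:
    def __init__(self):
        self.facts = {}
        self.watchers = {}  # key -> list of rules watching it

    def watch(self, key, rule):
        self.watchers.setdefault(key, []).append(rule)

    def set(self, key, value):
        self.facts[key] = value
        # event-driven step: fire only the rules subscribed to this key
        for rule in self.watchers.get(key, []):
            rule(self.facts)

def high_temp_rule(facts):
    if facts.get("tank_temp", 0) > 90:
        print("ALARM: tank temperature high")

wm = WorkingMemory()
wm.watch("tank_temp", high_temp_rule)
wm.set("tank_temp", 95)   # triggers the rule
wm.set("valve_pos", 0.5)  # no rule watches this key; nothing is evaluated
```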
Expert system

An expert system is software that attempts to provide an answer to a problem, or to clarify uncertainties where normally one or more human experts would need to be consulted. Expert systems are most commonly built for a specific problem domain, and they are a traditional application and/or subfield of artificial intelligence (AI). A wide variety of methods can be used to simulate the performance of the expert; however, common to most or all are: 1) the creation of a knowledge base, which uses some knowledge representation structure to capture the knowledge of the Subject Matter Expert (SME); 2) a process of gathering that knowledge from the SME and codifying it according to the structure, which is called knowledge engineering; and 3) once the system is developed, it is placed in the same real-world problem-solving situation as the human SME, typically as an aid to human workers or as a supplement to some information system. Expert systems may or may not have learning components.
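To make the first of these components concrete, here is a minimal Python sketch in which the knowledge base is a set of if-then rules and the inference engine forward-chains over known facts. The rules and fact names are invented for illustration.

```python
# Illustrative sketch of a tiny rule-based expert system: the "knowledge
# base" is a list of if-then rules elicited from an SME, and the
# inference engine forward-chains over known facts until fixpoint.

rules = [
    ({"fever", "rash"},     "measles_suspected"),
    ({"measles_suspected"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "rash"}, rules))
# derives 'measles_suspected' and then 'refer_to_doctor'
```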

Expert systems were introduced by researchers in the Stanford Heuristic Programming Project, including the "father of expert systems" Edward Feigenbaum, with the Dendral and Mycin systems. Principal contributors to the technology were Bruce Buchanan, Edward Shortliffe, Randall Davis, William van Melle, Carli Scott, and others at Stanford. Expert systems were among the first truly successful forms of AI software.

The topic of expert systems also has connections to general systems theory, operations research, business process
reengineering, and various topics in applied mathematics and management science.

Applications

Expert systems are designed to facilitate tasks in fields such as accounting, medicine, process control, financial services, production, and human resources, among others. Typically, the problem area is complex enough that a simpler, traditional algorithm cannot provide a proper solution. The foundation of a successful expert system depends on a series of technical procedures and development steps that may be designed by technicians and related experts. As such, expert systems do not typically provide a definitive answer, but rather probabilistic recommendations.

An example of the application of expert systems in the financial field is expert systems for mortgages. Loan departments are interested in expert systems for mortgages because of the growing cost of labour, which makes the handling and acceptance of relatively small loans less profitable. They also see a possibility for standardised, efficient handling of mortgage loans by applying expert systems, appreciating that for the acceptance of mortgages there are hard and fast rules which do not always exist with other types of loans. Another common application of expert systems in the financial area is trading recommendations in various marketplaces. These markets involve numerous variables and human emotions which may be impossible to deterministically characterize; thus, expert systems based on rules of thumb from experts and on simulation data are used. Expert systems of this type can range from ones providing regional retail recommendations, like Wishabi, to ones used to assist monetary decisions by financial institutions and governments.
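A hedged sketch of what such "hard and fast" mortgage rules might look like in code follows. The thresholds and scoring are invented for the example and do not come from any real lending system.

```python
# Hypothetical illustration of hard-and-fast mortgage acceptance rules.
# Note the graded (probabilistic-style) recommendation rather than a
# definitive yes/no answer, as the text above describes.

def mortgage_recommendation(income, loan_amount, credit_score):
    score = 1.0
    if loan_amount > 4 * income:   # invented debt-to-income rule
        score -= 0.5
    if credit_score < 620:         # invented minimum-credit rule
        score -= 0.4
    return "approve" if score > 0.6 else "refer to human underwriter"

print(mortgage_recommendation(income=50_000, loan_amount=150_000,
                              credit_score=700))  # -> approve
```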

While expert systems have distinguished themselves in AI research by finding practical applications, their adoption has been limited. Expert systems are notoriously narrow in their domain of knowledge — as an amusing example, a researcher used a "skin disease" expert system to diagnose his rustbucket car as likely to have developed measles — and the systems are thus prone to making errors that humans would easily spot. Additionally, once some of the mystique had worn off, most programmers realized that simple expert systems were essentially just slightly more elaborate versions of the decision logic they had already been using. Therefore, some of the techniques of expert systems can now be found in most complex programs without drawing much recognition.

A good demonstration of the limitations of an expert system is the Windows operating system troubleshooting software located in the "help" section of the taskbar menu. Obtaining technical operating-system support is often difficult for individuals not closely involved with the development of the operating system. Microsoft has designed its expert system to provide solutions, advice, and suggestions for common errors encountered while using its operating systems.

Another 1970s and 1980s application of expert systems, which we today would simply call AI, was in computer
games. For example, the computer baseball games Earl Weaver Baseball and Tony La Russa Baseball each had
highly detailed simulations of the game strategies of those two baseball managers. When a human played the game
against the computer, the computer queried the Earl Weaver or Tony La Russa Expert System for a decision on what
strategy to follow. Even those choices where some randomness was part of the natural system (such as when to
throw a surprise pitch-out to try to trick a runner trying to steal a base) were decided based on probabilities supplied
by Weaver or La Russa. Today we would simply say that "the game's AI provided the opposing manager's strategy."

Advantages

• Compared to traditional programming techniques, expert-system approaches provide added flexibility (and hence easier modifiability) through the ability to model rules as data rather than as code (see the sketch after this list). In situations where an organization's IT department is overwhelmed by a software-development backlog, rule engines, by facilitating turnaround, provide a means that can allow organizations to adapt more readily to changing needs.

• In practice, modern expert-system technology is employed as an adjunct to traditional programming techniques, and this hybrid approach allows the combination of the strengths of both approaches. Thus, rule engines allow control through programs (and user interfaces) written in a traditional language, and they also incorporate necessary functionality such as interoperability with existing database technology.
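The following minimal Python sketch illustrates the rules-as-data point: the rule set is loaded from a JSON string (it could equally come from a database table or a configuration file), so it can be changed without recompiling or redeploying program code. The rule format and field names are invented for the example.

```python
# Sketch of "rules as data": the rule set lives in a plain data
# structure that business users can edit without touching program code.

import json

rules_json = """
[
  {"if": {"customer_type": "gold"},   "then": {"discount": 0.15}},
  {"if": {"customer_type": "silver"}, "then": {"discount": 0.05}}
]
"""

def apply_rules(facts, rules):
    for rule in rules:
        if all(facts.get(k) == v for k, v in rule["if"].items()):
            facts.update(rule["then"])
    return facts

rules = json.loads(rules_json)  # rules loaded as data, not compiled in
print(apply_rules({"customer_type": "gold"}, rules))
# {'customer_type': 'gold', 'discount': 0.15}
```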

Disadvantages

• The Garbage In, Garbage Out (GIGO) phenomenon: a system that uses expert-system technology provides no guarantee about the quality of the rules on which it operates. Not all self-designated "experts" are necessarily so, and one notable challenge in expert-system design is getting a system to recognize the limits of its knowledge.
• An expert-system or rule-based approach is not optimal for all problems, and considerable knowledge is required so as not to misapply the systems.
• Ease of rule creation and rule modification can be double-edged. A system can be sabotaged by a non-
knowledgeable user who can easily add worthless rules or rules that conflict with existing ones. Reasons for
the failure of many systems include the absence of (or neglect to employ diligently) facilities for system
audit, detection of possible conflict, and rule lifecycle management (e.g. version control, or thorough testing
before deployment). The problems to be addressed here are as much technological as organizational.
ADVANCES IN REAL-TIME EXPERT SYSTEMS

The Workshop on Advances in Real-Time Expert System Technologies was held on 3 August 1992 in conjunction with the Tenth European Conference on AI. Participation was limited to invited researchers. The workshop focused on practical problems occurring during the implementation of real-time expert systems. In this respect, different industrial applications were discussed. The debate covered a wide range of topics, such as qualitative simulation and anytime algorithms for real-time process control. The workshop showed that real-time expert system techniques are getting more attention, even in Europe.

The Workshop on Advances in Real-Time Expert System Technologies was held in conjunction with the Tenth European Conference on Artificial Intelligence in Vienna on 3–7 August 1992. The workshop and conference were organized by the European Coordinating Committee for Artificial Intelligence and hosted by the Austrian Society for Artificial Intelligence. A summary, my personal impressions, and future directions are presented here.

Expert systems are technologies to support human reasoning by formalizing expert knowledge so that mechanized reasoning methods can be applied. In real-time systems, these reasoning methods must be reactive to external events and have to abide by stringent timing requirements. This behavior, known as timeliness, is infrequently achieved in expert systems. One possible solution to the problem is the use of anytime algorithms.

Anytime algorithms are algorithms whose output improves over time. Instead of creating the ultimate correct solution, such algorithms try to get better and better results the longer they run. Participants argued that anytime algorithms are useful in real-time systems and will play a more important role in the future. Real-time systems might be partitioned into a control part and reasoning parts. If the reasoning parts follow the notion of anytime algorithms, they can be interrupted by the control part as soon as the environment forces an interrupt. However, it has been shown that pattern-matching algorithms such as RETE or TREAT cannot be interrupted at any position, for consistency reasons; therefore, the question of how to intermingle heuristics with anytime algorithms is not solved sufficiently and requires deeper analysis.
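A rough Python sketch of the anytime idea under the assumptions above: the reasoner refines its answer in small steps and checks for the control part's deadline at each step boundary, always leaving a usable (if suboptimal) result behind. The tour-improvement task is just a convenient stand-in for such a loop.

```python
# Sketch of an anytime algorithm: improve the answer until the control
# part's deadline arrives; interruption happens only at step boundaries.

import itertools, time

def tour_length(tour):
    return sum(abs(a - b) for a, b in zip(tour, tour[1:]))

def anytime_tour(points, deadline):
    """Improve a tour until the deadline; always return the best so far."""
    tour = list(points)                    # initial (poor) solution
    while time.monotonic() < deadline:     # interruption point
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(candidate) < tour_length(tour):
                tour, improved = candidate, True
        if not improved:
            break                          # local optimum reached early
    return tour                            # best answer found so far

print(anytime_tour([3, 1, 4, 1, 5, 9, 2, 6],
                   deadline=time.monotonic() + 0.05))
```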

Generate-and-test methods can be regarded as anytime algorithms. One theoretical paper, by Carl-Helmut Coulon from the German National Research Institute for Computer Science, addressed the problem of defining a utility function for generate-and-test methods. These methods are conceived as incremental nonheuristic algorithms that can be called repeatedly to generate and test a hypothesis. As a stop criterion for a generate-and-test system, a lower bound on the number of solutions to be produced and an upper bound on the time to be spent were defined. Four strategies for utility management were presented and compared. Although the four strategies were ad hoc, the analysis gave deeper insight into the problem.
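A minimal Python sketch of such a stop criterion, assuming a toy hypothesis space; the generator, test function, and bounds are all invented for the example.

```python
# Generate-and-test with the stop criterion described above: run until
# a lower bound on produced solutions is met, or stop unconditionally
# once the time budget (upper bound) is exhausted.

import time

def generate_and_test(generate, test, min_solutions, max_seconds):
    solutions, start = [], time.monotonic()
    for hypothesis in generate():
        if test(hypothesis):
            solutions.append(hypothesis)
        elapsed = time.monotonic() - start
        if len(solutions) >= min_solutions or elapsed > max_seconds:
            break
    return solutions

# toy hypothesis space: find numbers divisible by 7
print(generate_and_test(lambda: iter(range(1, 10_000)),
                        lambda n: n % 7 == 0,
                        min_solutions=3, max_seconds=0.1))
# -> [7, 14, 21]
```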

Another paper described a method for predicting the match effort of a RETE network before run time. The idea is based on the fact that left and right memories in two-input nodes can be partitioned into intervals. By estimating these intervals statically (before run time) and calculating the number of tokens to be matched throughout the whole network, an upper bound on the total number of matched tokens can be given. This upper bound is much closer to reality than the theoretical worst-case complexity of the match would lead us to expect. On average, the upper-bound method predicts five times more matches than are actually performed during run time. The method can be seen as a generalization of the unique-attribute technique used in SOAR systems. It was mentioned that the time needed to perform the prediction can be rather large, sometimes twice the time needed for the match itself. It can be concluded that the upper bound is only a first step in the direction of run-time prediction for production systems, because it is unable to predict run time over several recognize-act cycles.
Another interesting discussion at the workshop concerned the efficiency of managing temporal facts in rule-based systems. Expert systems that make use of temporal reasoning require some form of automatic lifetime management for temporal facts. Whereas others have used inefficient truth maintenance systems for this purpose, a pragmatic approach has been taken in the language PAMELA-C. The proposed reasoning scheme can handle only events that have occurred at a discrete time point in the past; no hypothetical reasoning about future events is supported. Only relative temporal dependencies between events can be specified by the user. The scheme can be considered an intelligent garbage collector for RETE networks handling temporal facts.
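In the spirit of that discussion (and not PAMELA-C's actual mechanism, which the source does not detail), here is a hedged Python sketch of automatic lifetime management for time-stamped facts: each fact records when its event occurred, and facts older than the longest relative temporal dependency any rule can refer to are collected.

```python
# Sketch of an "intelligent garbage collector" for temporal facts.

import time

class TemporalFactBase:
    def __init__(self, max_age_seconds):
        # longest relative temporal dependency any rule can refer to
        self.max_age = max_age_seconds
        self.facts = []  # (timestamp, fact) pairs

    def assert_event(self, fact):
        self.facts.append((time.monotonic(), fact))

    def collect_garbage(self):
        """Drop facts no rule could still match against."""
        cutoff = time.monotonic() - self.max_age
        self.facts = [(t, f) for t, f in self.facts if t >= cutoff]

    def events_within(self, seconds):
        cutoff = time.monotonic() - seconds
        return [f for t, f in self.facts if t >= cutoff]

fb = TemporalFactBase(max_age_seconds=60.0)
fb.assert_event(("breaker_trip", "line_7"))
fb.collect_garbage()
print(fb.events_within(10.0))
```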

Three of the application-oriented papers presented deserve special attention. First, Jean-Luc Dormoy from the EDF Research and Development Center presented an architecture for building real-time systems from models by using a model-compiling technique. He claimed that classical model-based reasoning techniques are of no use for real-time problems because of their low performance. Therefore, an architecture called KSE, which has been used in nuclear plants, was developed. The aim of KSE is to causally explain undesired events in a plant and provide the operators with a description of the plant's operational state. To achieve high performance, knowledge-compiling techniques were used to automatically generate a sufficiently fast expert system from a model-based description of the plant.

KSE contains three large knowledge chunks. The first chunk, which comprises 12,000 components and 150,000 attribute-value pairs, represents the model of the plant. This description of components and the relations between them is modeled in an object-oriented way. The second chunk is a description of causal relationships between components. This description is represented by 250 so-called principles that take the form of a subset of predicate-calculus causal implications. The third chunk is a simple logical model of the intended operation of the system, namely, deducing a description of the plant state from the instrument data that is as complete as possible, generating possible assumptions, and removing the assumptions. The logical model consists of six general rules.

The knowledge compiler of KSE transforms the logical model into first-order production rules. These rules are then compiled into zero-order rules that can be seen as hard-wired if-then statements. In the current version of the plant, 47,000 zero-order rules have been generated. It was claimed that the worst-case run time of the whole system is no more than five seconds. However, the compilation process needs more than 10 hours.
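The step from first-order to zero-order rules can be illustrated with a small Python sketch: a rule quantified over components is expanded, against the plant model, into one fully instantiated if-then statement per component. The component names and predicates here are invented, not taken from KSE.

```python
# Sketch of compiling a first-order rule into zero-order rules by
# instantiating it against every component in the plant model.

plant_model = ["pump_1", "pump_2", "valve_3"]  # stand-in for 12,000 components

# first-order rule: for all X, flow_low(X) AND powered(X) -> fault(X)
first_order_rule = (["flow_low({x})", "powered({x})"], "fault({x})")

def compile_to_zero_order(rule, components):
    conditions, conclusion = rule
    return [
        ([c.format(x=comp) for c in conditions], conclusion.format(x=comp))
        for comp in components
    ]

for conds, concl in compile_to_zero_order(first_order_rule, plant_model):
    print(f"IF {' AND '.join(conds)} THEN {concl}")
# IF flow_low(pump_1) AND powered(pump_1) THEN fault(pump_1)
# ... one hard-wired rule per component
```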

Second, Monika Pfau-Wagenbauer and Thomas Brunner from Siemens Austria discussed the functioning of an expert system acting as part of a supervisory control and data-acquisition (SCADA) system for the public utility board of Singapore, controlling its 22-kV distribution network. The expert system is an operator-support tool that diagnoses network disturbances and device malfunctions. The SCADA system provides the expert system with relevant (filtered) process data and meets hard real-time deadlines. In contrast, the expert system runs on separate hardware and does not guarantee response time.
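A hedged Python sketch of this kind of decoupling, assuming a simple queue interface between the two sides (the paper does not specify how they actually communicate): the hard real-time acquisition side only filters and forwards events, while the expert system consumes them on its own best-effort schedule.

```python
# Sketch of decoupling a hard real-time SCADA side from a soft
# real-time expert system via a queue; interface is an assumption.

import queue, threading

event_queue = queue.Queue()

def scada_side():
    """Hard real-time side: only filters and forwards, never reasons."""
    for raw_event in [("breaker_trip", "feeder_12"), ("relay_op", "feeder_12")]:
        event_queue.put(raw_event)  # bounded, predictable work

def expert_system_side():
    """Soft real-time side: response time is not guaranteed."""
    while True:
        try:
            event = event_queue.get(timeout=1.0)
        except queue.Empty:
            break
        print("diagnosing:", event)  # slow reasoning happens here

threading.Thread(target=scada_side).start()
expert_system_side()
```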

The topological data representation in the knowledge base is modeled in an object-oriented way. There are hierarchical diagnosis levels using heuristic rules as well as compiled model-based knowledge. Based on a dynamically determined time window, it is decided when the main diagnosis process has to start. While the time window is open, disturbances are gathered from the SCADA system, and a prediagnosis method uses relevant component models to check for correct protection-system behavior. The observed behavior is compared to the correct behavior of the models, and conclusions about correct behavior can be drawn. In this way, correct-behavior assumptions increase monotonically. These assumptions are used later by the main diagnosis, which is structured as a hierarchy of different rule classes. The rule classes comprise 190 rules, and there are about 25,000 objects in the system. The average reasoning time is about 5 seconds, which is satisfactory because the SCADA system itself needs 8 minutes to scan and filter all 10,000 peripheral events.

Third, a paper by Zsusa Csaki and Karl Hangos from the Computer and Automation Institute of the Hungarian Academy of Sciences described lessons learned from using qualitative simulation in a chemical plant. The main message was that qualitative simulation–based advice generation for operators seems to be too complex to guarantee stringent timing requirements. To overcome the problem, only small, intermediate simulation steps should be performed, and these steps should be guided by the operator according to his or her heuristic knowledge. In this way, the simulator is supported in choosing the most interesting branch in the system's behavior tree.

The workshop showed that real-time expert system techniques are getting more attention, even in Europe. Timeliness and reactivity will play a role not only in so-called low-level tasks but also in planning and reasoning. Future systems will be heterogeneous in nature, comprising different reasoning methods such as anytime algorithms, probabilistic reasoning, and subsymbolic techniques. The issue is how to put it all together. In gluing different techniques together, the engineering task gets complex, and as the size of the systems grows, the correctness issue becomes more important. Using design and verification methods might be one answer to dealing with complex heterogeneous architectures. Thus, AI and software engineering must definitely come together.

Acknowledgments

I would like to thank Wolfgang Nejdl, the co-organizer of the workshop, for his comments, and Peter Patel-Schneider for proofreading.
BIBLIOGRAPHY

• en.wikipedia.org/wiki/Expert_systems

• www.google.com

• www.scribble.com
