Group Members (ID)
1. SILESHI NIBRET (0503390)
2. TAMIRAT TINKO (0503405)
3. ZERTHUN HABTE (0503484)
4. SERAWIT BEYENE (0503381)
5. SOLOMON TILAYE (0503399)
Abstract
This paper reviews common-sense definitions of intelligence; motivates research in artificial
intelligence (AI) aimed at the design and analysis of programs and computers that model
minds/brains; lays out the fundamental guiding hypothesis of AI; reviews the historical
development of AI as a scientific and engineering discipline; and explores the relationship of AI
to other disciplines. Research on artificial intelligence over the last two decades has greatly
improved the performance of both manufacturing and service systems. Currently, there is a dire
need for an article that presents a holistic literature survey of worldwide theoretical frameworks
and practical experiences in the field of artificial intelligence. This paper reports the state of
the art of artificial intelligence in an integrated, concise, and elegantly distilled manner. In
particular, it provides a broad review of recent developments within the field of AI and its
applications. The work is targeted at new entrants to the artificial intelligence field, and it
also reminds experienced researchers of issues they already know. Hopefully this discussion will
not only provide a useful context for the technical material that follows but also convey a sense
of what scientists, engineers, mathematicians, and philosophers who have been drawn to the field
find exciting about AI.
1. Introduction
In the 21st century, artificial intelligence (AI) has become an important area of research in virtually
all fields: engineering, science, education, medicine, business, accounting, finance, marketing,
economics, stock market and law, among others [1]. The field of AI has grown enormously, to the
extent that tracking the proliferation of studies has become a difficult task. Apart from the
application of AI to the fields mentioned above, studies have been segregated into many areas,
each of these springing up as an individual field of knowledge [1].
information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction [8]. Particular applications of AI include expert systems, speech recognition and
machine vision.
The term AI was coined by John McCarthy, an American computer scientist, in 1956 at the Dartmouth
Conference, where the discipline was born. Today, it is an umbrella term that encompasses
everything from robotic process automation to actual robotics. It has gained prominence recently
due, in part, to big data, or the increase in speed, size and variety of data businesses are now
collecting [2]. AI can perform tasks such as identifying patterns in the data more efficiently than
humans, enabling businesses to gain more insight out of their data.
Artificial intelligence (AI) is an area of computer science that emphasizes the creation of intelligent
machines that work and react like humans [7]. Some of the activities computers with artificial
intelligence are designed for include:
Speech recognition
Learning
Planning
Problem solving
Research associated with artificial intelligence is highly technical and specialized. The core
problems of artificial intelligence include programming computers for certain traits such as:
Knowledge
Reasoning
Problem solving
Perception
Learning
Planning
Ability to manipulate and move objects
Knowledge engineering is a core part of AI research. Machines can often act and react like humans
only if they have abundant information relating to the world [9]. Artificial intelligence must have
access to objects, categories, properties and relations between all of them to implement knowledge
engineering. Instilling common sense, reasoning and problem-solving power in machines is a
difficult and tedious task [4].
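As a toy illustration of giving a machine access to objects, categories, properties and the relations between them, the sketch below builds a miniature knowledge base with property inheritance along an "is-a" hierarchy. The categories and facts are invented for the example, not drawn from any real knowledge base.

```python
# A toy knowledge base: an "is-a" hierarchy of categories plus properties,
# with inheritance along the hierarchy. All facts here are illustrative.

ISA = {"canary": "bird", "bird": "animal"}          # category -> parent category
PROPS = {
    "bird": {"can_fly": True, "has_feathers": True},
    "animal": {"alive": True},
    "canary": {"color": "yellow"},
}

def lookup(category, prop):
    """Walk up the is-a hierarchy until the property is found."""
    while category is not None:
        if prop in PROPS.get(category, {}):
            return PROPS[category][prop]
        category = ISA.get(category)
    return None

print(lookup("canary", "color"))    # yellow (stored directly on canary)
print(lookup("canary", "can_fly"))  # True (inherited from bird)
print(lookup("canary", "alive"))    # True (inherited from animal)
```

Even this tiny sketch shows why knowledge engineering is tedious: every object, category, property and relation must be entered and kept consistent by hand.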
Machine learning is another core part of AI. Learning without any kind of supervision requires an
ability to identify patterns in streams of inputs, whereas learning with adequate supervision
involves classification and numerical regressions [6]. Classification determines the category an
object belongs to and regression deals with obtaining a set of numerical input or output examples,
thereby discovering functions enabling the generation of suitable outputs from respective inputs.
Mathematical analysis of machine learning algorithms and their performance is a well-defined
branch of theoretical computer science often referred to as computational learning theory.
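The two supervised tasks just described, classification and regression, can each be sketched in a few lines. The training data, labels and query points below are invented for the demonstration, and the algorithms chosen (nearest-neighbour classification, least-squares line fitting) are only the simplest representatives of each task.

```python
# Minimal illustrations of the two supervised-learning tasks described
# above: classification (assigning a category) and regression (fitting a
# numeric function). Data and labels are made up for the example.

def nearest_neighbour_classify(train, point):
    """Classify `point` with the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: dist(ex[0], point))[1]

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, returning (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

train = [((0.0, 0.0), "cat"), ((5.0, 5.0), "dog")]
print(nearest_neighbour_classify(train, (1.0, 0.5)))  # cat
print(fit_line([0, 1, 2, 3], [1, 3, 5, 7]))           # (2.0, 1.0)
```

The classifier determines the category an object belongs to; the fitted line is a discovered function generating suitable outputs from the inputs, exactly the division of labour described above.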
Machine perception deals with the capability to use sensory inputs to deduce the different aspects
of the world, while computer vision is the power to analyze visual inputs with few sub-problems
such as facial, object and speech recognition [12].
Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as
object manipulation and navigation, along with sub-problems of localization, motion planning and
mapping.
The attendees of the Dartmouth Conference included McCarthy, Marvin Minsky, Oliver Selfridge, Ray Solomonoff, Trenchard More, Claude Shannon,
Nathan Rochester, Arthur Samuel, Allen Newell, and Herbert Simon. Some of these researchers
went on to open centers of AI research around the world, such as at the Massachusetts Institute of
Technology, Stanford, Edinburgh and Carnegie Mellon University.
Two main approaches were developed for general AI: the top-down approach, which started with
the higher-level functions and implemented those, and the bottom-up approach, which looked at
the neuron level and worked up to create higher-level functions. By 1956, Allen Newell and
Herbert Simon [4] had developed the Logic Theorist, a theorem-proving program.
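The Logic Theorist itself is not reproduced here; the sketch below only illustrates the general idea of mechanical inference that such theorem-proving programs build on: repeatedly apply "if premises then conclusion" rules until the goal is derived or no new fact appears. The facts and rules are invented for the demonstration.

```python
# A minimal forward-chaining inference sketch (not Newell and Simon's
# actual program): derive new facts from rules until the goal appears
# or nothing new can be derived.

def forward_chain(facts, rules, goal):
    """rules: list of (premises, conclusion) pairs. Returns True if goal derivable."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return goal in facts

rules = [(("p",), "q"), (("q", "r"), "s")]
print(forward_chain({"p", "r"}, rules, "s"))  # True: p -> q, then q & r -> s
```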
In the following years, several programs and methodologies were developed: the General Problem
Solver (1959), the Geometry Theorem Prover (1958), STRIPS (1971), Oettinger's Virtual Mall
(1952), natural language processing implemented in the Eliza program (1966), SHRDLU (1973),
expert systems leading to Deep Blue (1997), and some of the earlier versions of embodied
intelligence, such as Herbert, Toto, and Genghis (Brooks, 1987), which roamed the
laboratories at MIT.
By the 1980s AI researchers were beginning to understand that creating artificial intelligence was
a lot more complicated than first thought. Given this, Brooks came to believe that the way
forward was for researchers to focus on creating individual modules based
on different aspects of the human brain, such as a planning module, a memory module, etc., which
could later be combined to create intelligence. In the recent past, with the improvement
of the technologies associated with computing and robots, there has been a broad-based attempt to
build embodied intelligences. But the peculiar nature of this field has resulted in the many attempts
being almost entirely unconnected. Because of the difficulty and lack of success in building
physical robots, there has been a tendency towards computer simulation, termed Artificial
General Intelligence, where virtual agents in a virtual-reality world attempt to achieve intelligent
behavior [12].
Research in AI spans many areas, including expert systems, genetic algorithms, knowledge
representation, machine learning, natural
language understanding, neural networks, theorem proving, constraint satisfaction, and theory of
computation [2]. Since many readers of this article may require an at-a-glance view of the AI field, the
author has utilized a flow diagram to illustrate the whole structure of this paper, and the
relationship among the diverse fields of AI, as presented in Figure 1 [1]. What follows is a brief
discussion of some of the important areas of AI. These descriptions only account for a selected
number of areas.
2.1. Reasoning
The first major area considered here is that of reasoning. Research on reasoning has evolved along
the following dimensions: case-based, non-monotonic, model-based, qualitative, automated, spatial,
temporal and common-sense reasoning.
As an illustrative example, case-based reasoning (CBR) is briefly discussed. In CBR, a set of
cases stored in a case base is the primary source of knowledge. Cases represent specific experience
in a problem-solving domain, rather than general rules [10]. The main activities when solving
problems with cases are described in the case-based reasoning cycle. This cycle comprises four
steps: retrieve, reuse, revise and retain. First, the new problem to be solved must be formally
described as a case (new case). Then, a case that is similar to the current problem is retrieved from
the case base. The solution contained in this retrieved case is reused to solve the new problem with
a new solution obtained and presented to the user who can verify and possibly revise the solution.
The revised case (or the experience gained during the case-based problem solving process) is then
retained for future problem solving. Detailed information on dimensions or how they are related
could be obtained from the relevant sources listed in the references [10].
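The four-step cycle described above can be sketched directly in code. The case base below (temperature readings mapped to actions) and the similarity measure (numeric closeness) are invented for the demonstration; real CBR systems use much richer case descriptions and similarity functions.

```python
# A sketch of the CBR cycle: retrieve, reuse, revise, retain.
# Cases pair a problem description with its solution; similarity here
# is simple numeric closeness, an assumption made for the demo.

case_base = [({"temp": 20}, "no action"), ({"temp": 80}, "cool down")]

def retrieve(new_problem):
    """Step 1: find the stored case most similar to the new problem."""
    return min(case_base, key=lambda c: abs(c[0]["temp"] - new_problem["temp"]))

def solve(new_problem):
    _, solution = retrieve(new_problem)        # 1. retrieve a similar case
    proposed = solution                        # 2. reuse its solution
    revised = proposed                         # 3. revise (user check; identity here)
    case_base.append((new_problem, revised))   # 4. retain the new experience
    return revised

print(solve({"temp": 75}))  # cool down (nearest case is temp=80)
```

Note that step 4 grows the case base, so the system's competence improves with every problem it solves, which is the core appeal of CBR.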
subsequently refined by playing a large number of games against itself and applying reinforcement
learning [10].
2.7 Robotics
Robotic navigation, at least in static environments, is largely solved. Current efforts consider how
to train a robot to interact with the world around it in generalizable and predictable ways. A natural
requirement that arises in interactive environments is manipulation, another topic of current
interest [5]. The deep learning revolution is only beginning to influence robotics, in large part
because it is far more difficult to acquire the large labeled data sets that have driven other
learning-based areas of AI. Reinforcement learning (see above), which obviates the requirement of
labeled data, may help bridge this gap but requires systems to be able to safely explore a policy
space without committing errors that harm the system itself or others. Advances in reliable machine perception,
including computer vision, force, and tactile perception, much of which will be driven by machine
learning, will continue to be key enablers to advancing the capabilities of robotics [6].
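The idea of learning from feedback rather than labels can be sketched with tabular Q-learning in a toy environment. The corridor world below (states 0 to 4, reward for reaching state 4) and all the hyperparameters are invented for the illustration; real robotic reinforcement learning is far more involved, precisely because of the safe-exploration problem noted above.

```python
import random

# A toy tabular Q-learning agent in a 1-D corridor: states 0..4,
# reward 1 for reaching state 4. Only a sketch of learning from
# feedback instead of labeled data; not a robotics-grade algorithm.

random.seed(0)
N, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3      # learning rate, discount, exploration

for _ in range(300):                   # training episodes
    s = 0
    while s != N - 1:
        if random.random() < eps:      # explore
            a = random.choice(ACTIONS)
        else:                          # exploit the current value estimates
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)           # clamped move
        r = 1.0 if s2 == N - 1 else 0.0
        target = r + gamma * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# Greedy action per non-terminal state after training
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)])
```

During early episodes the agent wanders, i.e. "explores the policy space"; in a physical robot those wandering steps are exactly the potentially harmful errors the paragraph above warns about.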
3.1 Healthcare
One of the best known AI technologies in healthcare is IBM Watson [12]. It understands natural language and is capable of
responding to questions asked of it. The system mines patient data and other available data sources
to form a hypothesis, which it then presents with a confidence scoring schema [13].
Other AI applications include chatbots, computer programs used online to answer questions,
assist customers, help schedule follow-up appointments or aid patients through the billing
process, as well as virtual health assistants that provide basic medical feedback.
3.2 Business
Robotic process automation is being applied to highly repetitive tasks normally performed by
humans. Machine learning algorithms are being integrated into analytics and CRM platforms to
uncover information on how to better serve customers. Chatbots have been incorporated into
websites to provide immediate service to customers. Automation of job positions has also become
a talking point among academics and IT consultancies such as Gartner and Forrester [13].
3.3 Education
AI can automate grading, giving educators more time. AI can assess students and adapt to their
needs, helping them work at their own pace. AI tutors can provide additional support to students,
ensuring they stay on track. AI could change where and how students learn, perhaps even replacing
some teachers [12].
3.4 Finance
AI applied to personal finance applications, such as Mint or Turbo Tax, is upending financial
institutions. Applications such as these could collect personal data and provide financial advice.
Other programs, IBM Watson being one, have been applied to the process of buying a home.
Today, software performs much of the trading on Wall Street [13].
3.5 Law
The discovery process, sifting through documents, in law is often overwhelming for humans.
Automating this process is a more efficient use of time [8]. Startups are also
building question-and-answer computer assistants that can sift programmed-to-answer questions
by examining the taxonomy and ontology associated with a database [15].
3.6 Manufacturing
This is an area that has been at the forefront of incorporating robots into the workflow. Industrial
robots used to perform single tasks and were separated from human workers, but as the technology
advanced that changed.
3.7 Transportation
Many companies have been progressing quickly in this field with AI. Fuzzy logic controllers have
been developed for automatic gearboxes in automobiles. For example, the 2006 Audi TT, VW
Touareg and VW Caravelle feature the DSP transmission, which utilizes fuzzy logic. A number of
Škoda variants (Škoda Fabia) also currently include a fuzzy-logic-based controller [3]. AI in
transportation is expected to provide safe, efficient, and reliable transportation while minimizing
the impact on the environment and communities [11]. The major challenge to developing this AI
is the fact that transportation systems are inherently complex systems involving a very large
number of components and different parties, each having different and often conflicting objectives.
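The gearbox controllers mentioned above are proprietary, so the sketch below is only a generic two-rule fuzzy controller, not the actual automotive system: throttle position in [0, 1] is fuzzified with triangular membership functions, and the output shift point (in rpm) is defuzzified as the membership-weighted average of the two rules' outputs. All the numbers are invented for the illustration.

```python
# A generic two-rule fuzzy controller sketch (not the actual DSP/Škoda
# controller). Input: throttle position in [0, 1]; output: gear-shift
# point in rpm. All membership shapes and rpm values are invented.

def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def shift_rpm(throttle):
    gentle = tri(throttle, -0.5, 0.0, 0.7)  # rule 1: gentle driving -> shift at 2000 rpm
    sporty = tri(throttle, 0.3, 1.0, 1.5)   # rule 2: sporty driving -> shift at 5000 rpm
    # Defuzzify: membership-weighted average of the rule outputs
    return (gentle * 2000 + sporty * 5000) / (gentle + sporty)

print(shift_rpm(0.1))  # gentle rule dominates: near 2000 rpm
print(shift_rpm(0.5))  # both rules fire equally: near 3500 rpm
print(shift_rpm(0.9))  # sporty rule dominates: near 5000 rpm
```

The appeal of fuzzy control in a gearbox is visible even here: the shift point varies smoothly with throttle position instead of jumping at a hard threshold.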
The first example is the Turing Test, a method used to determine if a computer can actually think like a human, although the method is
controversial.
The second example is from Arend Hintze, an assistant professor of integrative biology and
computer science and engineering at Michigan State University. He categorizes AI into four types,
from the kind of AI systems that exist today to sentient systems, which do not yet exist.
His first type, reactive machines, is exemplified by IBM's Deep Blue, the chess program that beat
Garry Kasparov in the 1990s. Deep Blue analyzed possible moves and chose among them based
on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even
more powerful computer to actually beat Kasparov [14].
Similarly, Google's AlphaGo, which has beaten top human Go experts, can't evaluate all
potential future moves either. Its analysis method is more sophisticated than Deep Blue's, using
a neural network to evaluate game developments.
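Neither Deep Blue's search nor AlphaGo's network is shown here; the sketch below only illustrates the textbook minimax idea that both systems build on: score a game tree by assuming each side picks its best continuation. The tree and its leaf evaluations are invented for the example.

```python
# Textbook minimax over a toy game tree (not Deep Blue's or AlphaGo's
# actual algorithm). Leaves are position evaluations; inner nodes are
# lists of the moves available from that position.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: a position evaluation
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 5], [2, 9]]    # two moves for us, two replies for the opponent each
print(minimax(tree, True))  # 3: the opponent answers 3 or 2, so we pick the 3 branch
```

The crucial point made above is visible in the code: the evaluation at the leaves is all the program "knows" about the game, so the same machinery cannot transfer to a different task.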
These methods do improve the ability of AI systems to play specific games better, but they can't
be easily changed or applied to other situations. These computerized imaginations have no
concept of the wider world, meaning they can't function beyond the specific tasks they're
assigned and are easily fooled [13].
They can't interactively participate in the world the way we imagine AI systems one day might.
Instead, these machines will behave exactly the same way every time they encounter the same
situation. This can be very good for ensuring an AI system is trustworthy: you want your
autonomous car to be a reliable driver. But it's bad if we want machines to truly engage with,
and respond to, the world [7]. These simplest AI systems won't ever be bored, or interested, or
sad.
So how can we build AI systems that build full representations, remember their experiences and
learn how to handle new situations? Brooks was right that this is very difficult to do. Hintze's
own research into methods inspired by Darwinian evolution can start to make up for human
shortcomings by letting the machines build their own representations.
TYPE 4: SELF-AWARENESS
The final step of AI development is to build systems that can form representations about
themselves. Ultimately, we AI researchers will have to not only understand consciousness, but
build machines that have it.
This is, in a sense, an extension of the theory of mind possessed by Type III artificial
intelligences. Consciousness is also called self-awareness for a reason. ("I want that item" is a
very different statement from "I know I want that item.") Conscious beings are aware of
themselves, know about their internal states, and are able to predict feelings of others [14]. We
assume someone honking behind us in traffic is angry or impatient, because that's how we feel
when we honk at others. Without a theory of mind, we could not make those sorts of inferences.
While we are probably far from creating machines that are self-aware, we should focus our
efforts toward understanding memory, learning and the ability to base decisions on past
experiences [13]. This is an important step to understand human intelligence on its own. And it is
crucial if we want to design or evolve machines that are more than exceptional at classifying
what they see in front of them.
6 Examples of AI technology
Robotic process automation (RPA) is the application of software to automate repetitive
tasks normally performed by humans [12]. RPA is different from IT automation in that it can
adapt to changing circumstances.
Machine learning is the science of getting a computer to act without being explicitly programmed. Deep
learning is a subset of machine learning that, in very simple terms, can be thought of as the
automation of predictive analytics [8]. There are three types of machine learning
algorithms: supervised learning, in which data sets are labeled so that patterns can be
detected and used to label new data sets; unsupervised learning, in which data sets aren't
labeled and are sorted according to similarities or differences; and reinforcement learning, in
which data sets aren't labeled but, after performing an action or several actions, the AI
system is given feedback.
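The "sorted according to similarities or differences" behaviour of unsupervised learning can be sketched with a minimal one-dimensional k-means: unlabeled numbers are grouped purely by closeness to evolving cluster centers. The data and initial centers below are invented for the demonstration.

```python
# A minimal 1-D k-means sketch of unsupervised learning: no labels,
# just grouping by similarity. Data and initial centers are invented.

def kmeans_1d(data, centers, iterations=10):
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for x in data:                        # assign each point to its nearest center
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        centers = [sum(c) / len(c) if c else centers[i]   # move centers to cluster means
                   for i, c in enumerate(clusters)]
    return centers, clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers, clusters = kmeans_1d(data, centers=[0.0, 5.0])
print(sorted(centers))  # roughly [1.0, 9.0]: the two natural groups emerge
```

No label ever enters the algorithm; the two groups fall out of the data alone, which is exactly what distinguishes this setting from the supervised case described first.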
Machine vision is the science of making computers see. Machine vision captures and
analyzes visual information using a camera, analog-to-digital conversion and digital signal
processing. It is often compared to human eyesight [13], but machine vision isn't bound by
biology and can be programmed to see through walls, for example. It is used in a range of
applications from signature identification to medical image analysis. Computer vision, which
is focused on machine-based image processing, is often conflated with machine vision.
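After the camera and analog-to-digital conversion described above, an image is just a grid of numbers, and one of the simplest machine-vision operations is thresholding it into a foreground/background mask. The tiny "image" below is hand-written for the demonstration.

```python
# A toy machine-vision step: threshold a grayscale grid (0-255 values)
# into a binary foreground mask. The image here is hand-written.

image = [
    [10,  12, 200, 11],
    [ 9, 210, 220, 10],
    [11,  13, 205, 12],
]

def binarize(img, threshold=128):
    """Mark every pixel brighter than the threshold as foreground (1)."""
    return [[1 if px > threshold else 0 for px in row] for row in img]

for row in binarize(image):
    print(row)
# The bright pixels form a rough vertical stroke in the mask.
```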
Natural language processing (NLP) is the processing of human, rather than computer,
language by a computer program. One of the older and best-known examples of NLP is spam
detection, which looks at the subject line and the text of an email and decides if it's junk.
Current approaches to NLP are based on machine learning. NLP tasks include text
translation, sentiment analysis and speech recognition [9].
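The spam-detection task mentioned above can be caricatured in a few lines: score an email by spam-indicative words in its subject and body. Real systems learn such weights from data with machine learning; the word list and threshold below are invented for the demonstration.

```python
# A deliberately tiny spam scorer: sum hand-picked word weights over the
# subject line and body. Real filters learn these weights from data;
# this word list and threshold are invented for the demo.

SPAM_WORDS = {"winner": 2.0, "free": 1.5, "prize": 2.0, "urgent": 1.0}

def spam_score(subject, body):
    words = (subject + " " + body).lower().split()
    return sum(SPAM_WORDS.get(w, 0.0) for w in words)

def is_spam(subject, body, threshold=2.5):
    return spam_score(subject, body) >= threshold

print(is_spam("URGENT: you are a WINNER", "claim your free prize now"))  # True
print(is_spam("Meeting notes", "minutes from today's standup"))          # False
```

The weakness of the hand-written word list (e.g. "URGENT:" with its colon never matches) is precisely why, as noted above, current approaches to NLP are based on machine learning instead.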
Robotics is a field of engineering focused on the design and manufacturing of robots. Robots
are often used to perform tasks that are difficult for humans to perform or perform
consistently [15]. They are used in assembly lines for car production or by NASA to move
large objects in space. More recently, researchers are using machine learning to build robots
that can interact in social settings.
CONCLUSION
The field of artificial intelligence gives machines the ability to think analytically, using
concepts. Over the last two decades, artificial intelligence techniques have made tremendous
contributions to many areas, and artificial intelligence will continue to play an increasingly
important role in various fields.
This review has not attempted to detail all the literature in the area but to report mainly the most
recent work, particularly in the area of embodied AI. There is a major field of agent-based
programs, many of them commercial, exemplified by World of Warcraft. This has barely
been touched. The disparate nature of the reported work makes it very difficult to grasp, or perhaps
makes it unnecessary to grasp. Perhaps the only two concepts which have been shared between
researchers are Baars' Global Workspace Theory and the agent-based model, advanced
independently by Brooks and Minsky. A curious aspect of the literature is the very large
preponderance of proposed schemes over schemes actually implemented. Practitioners in the field
shy away from actually building robots, whether from considerations of cost or from a lack of
expertise in the area. Having digested all of these reported efforts, two basic conclusions must be
drawn: firstly, the researcher is free to go forward unfettered because there is no existing
formalism in the field. Secondly, the achievements of the field, attended as they are by a 33
million-fold (Moore's law) improvement in computing, are disappointing: the field is a long way
from producing a robot that approaches human intelligence and functionality.
References
[1] S. A. Oke, "A literature review on artificial intelligence", Journal of Information and
Management Sciences, Vol. 19, pp. 535-570, 2008.
[2] E. L. Thorndike, "Fundamentals of learning", Teachers College, Columbia University, New
York, 1932.
[3] A. M. Turing, "Computing machinery and intelligence", Mind, Vol. 59, pp. 433-460, 1950.
[4] A. Newell and H. A. Simon, "The logic theory machine: a complex information processing
system", The Rand Corporation, Santa Monica, available at http://shelf1.library.cmu.edu/IMLS/
MindModels/logictheorymachine.pdf, 1956.
[5] D. O. Hebb, "The organization of behaviour", Wiley, New York, 1949.
[6] V. Honavar, Artificial Intelligence Research Laboratory, 4th International Conference
on Autonomous Robots and Agents, Iowa State University, Ames, Iowa, 2016.
[7] J. Y. Halpern and R. Pucella, "A logic for reasoning about upper probabilities", Journal of
Artificial Intelligence Research, Vol. 17, pp. 57-81, 2002.
[8] E. Reiter, S. G. Sripada and R. Robertson, "Acquiring correct knowledge for natural
language generation", Journal of Artificial Intelligence Research, Vol. 18, pp. 491-516, 2003.
[9] One Hundred Year Study on Artificial Intelligence (AI100), Stanford University, 2012,
[Online]. Available: https://ai100.stanford.edu [Accessed: January 14, 2017].
[10] "Artificial Intelligence - Intelligent Systems", Tutorials Point, [Online]. Available:
www.tutorialspoint.com [Accessed: January 13, 2017].
[11] "Artificial Intelligence (AI)", Techopedia, [Online]. Available: