I. INTRODUCTION
It is an intriguing problem to design a software agent that
can find a reasonable sequence of actions to reach a
particular goal state from an initial state. In general,
search is the process of examining different possible sequences
of actions that lead to a desired state, and choosing the best
action sequence to execute when a similar query arrives in
the future.
Almost all search algorithms share the following elements
and functionalities: initial state, goal state, successor
function, goal test, and path cost.
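These five elements can be made concrete in a short sketch. The following is not the report's actual code; it is a minimal, hypothetical C++ illustration using a toy problem (states are integers on a number line) to show how the elements plug into a breadth-first search.

```cpp
#include <vector>
#include <queue>
#include <set>
#include <utility>

// Hypothetical sketch: the five common elements as a generic problem interface.
struct Problem {
    int initialState;                                     // initial state
    int goalState;                                        // goal state
    std::vector<int> successors(int s) const {            // successor function
        return { s - 1, s + 1 };                          // toy: steps on a number line
    }
    bool goalTest(int s) const { return s == goalState; } // goal test
    int stepCost(int, int) const { return 1; }            // path cost (uniform here)
};

// Breadth-first search over the toy problem; returns cost to goal, or -1.
int bfs(const Problem& p) {
    std::queue<std::pair<int, int>> frontier;             // (state, cost so far)
    std::set<int> visited;
    frontier.push({p.initialState, 0});
    visited.insert(p.initialState);
    while (!frontier.empty()) {
        auto [s, cost] = frontier.front();
        frontier.pop();
        if (p.goalTest(s)) return cost;
        for (int n : p.successors(s))
            if (visited.insert(n).second)                 // true only for new states
                frontier.push({n, cost + p.stepCost(s, n)});
    }
    return -1;
}
```

The same interface serves DFS, IDDFS, and A* by swapping the frontier data structure.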
These are the common attributes and functionalities of
the algorithms tested in this study, namely depth-first
search (memoizing, iterative deepening), breadth-first search,
and A* search (with Manhattan distance, Euclidean distance,
and misplaced-tiles heuristics).
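The three A* heuristics named above are standard for the sliding-tile puzzle. As a sketch (the function names and flat-array board representation are illustrative assumptions, not the report's code), they can be written as:

```cpp
#include <array>
#include <cmath>
#include <cstdlib>

// Illustrative 3x3 board: a flat array of 9 tiles, with 0 denoting the blank.
using Board = std::array<int, 9>;

// Manhattan distance: sum of |row diff| + |col diff| over all non-blank tiles.
int manhattan(const Board& b, const Board& goal) {
    int h = 0;
    for (int i = 0; i < 9; ++i) {
        if (b[i] == 0) continue;
        for (int j = 0; j < 9; ++j)
            if (goal[j] == b[i])
                h += std::abs(i / 3 - j / 3) + std::abs(i % 3 - j % 3);
    }
    return h;
}

// Euclidean distance: straight-line distance between current and goal squares.
double euclidean(const Board& b, const Board& goal) {
    double h = 0;
    for (int i = 0; i < 9; ++i) {
        if (b[i] == 0) continue;
        for (int j = 0; j < 9; ++j)
            if (goal[j] == b[i]) {
                double dr = i / 3 - j / 3, dc = i % 3 - j % 3;
                h += std::sqrt(dr * dr + dc * dc);
            }
    }
    return h;
}

// Misplaced tiles: count of non-blank tiles not on their goal squares.
int misplaced(const Board& b, const Board& goal) {
    int h = 0;
    for (int i = 0; i < 9; ++i)
        if (b[i] != 0 && b[i] != goal[i]) ++h;
    return h;
}
```

All three are admissible, which is why every A* variant in the experiments finds the optimal 12-step path.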
II. EXPERIMENTAL SETUP
In order to represent the state of the puzzle, a generic data
structure is utilized (Fig. 1).
As in the Node data structure, object-oriented
programming principles are used in the design of the search
algorithms as well (Fig. 2). The whole software setup consists of
more than 3000 lines of C++ code and around 10 classes. It
was very important to consider several performance issues
during the implementation, such as const-correctness
and de-allocation.
Since nodes are allocated dynamically from the heap,
it is very important to release the memory acquired for a
particular search operation back to the system.
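One way to make this release automatic, shown here as a minimal sketch rather than the report's actual Node class, is to have each node own its children through `std::unique_ptr`: destroying the root then frees the entire search tree without a manual de-allocation pass.

```cpp
#include <memory>
#include <vector>

// Illustrative node: children are owned via unique_ptr, so when a Node is
// destroyed, its whole subtree is released recursively and automatically.
struct Node {
    int state = 0;
    std::vector<std::unique_ptr<Node>> children;  // owned child nodes
};

// Walks the tree and counts nodes (e.g. to verify nothing was lost).
int countNodes(const Node& n) {
    int total = 1;
    for (const auto& c : n.children)
        total += countNodes(*c);
    return total;
}
```

With this ownership scheme, a tool such as valgrind reports no leaks when the search object goes out of scope.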
Const-correctness is one of the most important good
practices when coding in C++. If a variable is passed
by value to a function, a new copy of this variable is created
inside the new local scope. Passing the variable by
reference avoids this copy. However, one should be very
careful when objects or data are passed by reference,
since the value stored at that memory location can easily
be modified; passing by const reference prevents this.

Fig. 1: Nodes are the structures that the search algorithms make
use of. Since nodes are templated structures, any kind of data
can be stored inside them. Boards are used by the nodes in this
problem, where they are composed of Tile structures.

*This study is a part of the EE586 Artificial Intelligence course offered by
Dr. Afsar Saranli
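The pass-by-const-reference practice can be sketched as follows; the `TileBoard` type and function names here are illustrative, not taken from the report's code base.

```cpp
#include <vector>

// Illustrative board wrapper holding a potentially large tile array.
struct TileBoard {
    std::vector<int> tiles;
};

// Pass-by-value: the entire vector is copied on every call.
int sumByValue(TileBoard b) {
    int s = 0;
    for (int t : b.tiles) s += t;
    return s;
}

// Pass-by-const-reference: no copy is made, and any attempt to modify
// b inside the function is rejected at compile time.
int sumByConstRef(const TileBoard& b) {
    int s = 0;
    for (int t : b.tiles) s += t;
    return s;
}
```

Both functions compute the same result, but in a search that touches hundreds of thousands of nodes, avoiding the per-call copy matters.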
Fig. 2: All of the search algorithms share similar functionalities, which are part of the search class.
Fig. 4: IDDFS algorithm. Actual solution time: 0.257 sec, #nodes expanded: 318, #nodes expanded in the optimal path: 12.
Fig. 5: DFS algorithm. Actual solution time: 6457.16 sec, #nodes expanded: 168788, #nodes expanded in the optimal path: 9,
but the solution is found at the 67562nd level. Please note that the execution is killed in the valgrind case because
it takes about two hours even when valgrind is not activated. Therefore, we can estimate that this execution would take
approximately 3 or 4 days.
Fig. 6: A* algorithm with the Manhattan distance heuristic. Actual solution time: 0.016 sec, #nodes expanded: 31, #nodes
expanded in the optimal path: 12.
Fig. 7: A* algorithm with the Euclidean distance heuristic. Actual solution time: 0.018 sec, #nodes expanded: 30, #nodes
expanded in the optimal path: 12.
Fig. 8: A* algorithm with the total-number-of-misplaced-tiles heuristic. Actual solution time: 0.0298 sec, #nodes expanded: 113,
#nodes expanded in the optimal path: 12.
Fig. 9: All of the algorithms except DFS find the optimal path. DFS is not included in the experiments since it would require
days to simulate DFS for a problem of true distance 4.
Fig. 10: Uninformed algorithms require much larger times to solve problems with larger depth values. It is worth
noting that IDDFS requires much more time than the BFS algorithm. Although they should perform theoretically similarly,
restarting the search from the very first level -in the case of IDDFS- requires releasing the memory and allocating it
again, which is a serious overhead. This problem can be overcome by storing nodes in a hash table or by keeping track of
already allocated nodes to avoid the de-allocation+allocation procedure.
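The hash-table fix suggested in the caption can be sketched as a small transposition cache; the string key and depth payload here are illustrative assumptions, not the report's implementation.

```cpp
#include <string>
#include <unordered_map>

// Illustrative cache for IDDFS restarts: remembers the shallowest depth at
// which each state was expanded, so later iterations skip re-expanding
// (and re-allocating) states already covered at least as shallowly.
struct NodeCache {
    std::unordered_map<std::string, int> seenAtDepth;  // state key -> shallowest depth

    // Returns true if the state still needs expansion in this iteration:
    // either it is new, or it is now reached more shallowly than before.
    bool shouldExpand(const std::string& key, int depth) {
        auto it = seenAtDepth.find(key);
        if (it != seenAtDepth.end() && it->second <= depth)
            return false;                              // already covered
        seenAtDepth[key] = depth;
        return true;
    }
};
```

Each deepening iteration then reuses the cache instead of freeing and re-building the explored portion of the tree.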
Fig. 11: The difference between IDDFS and BFS is not as noticeable here as it is in Fig. 10. It is expected to shrink
if the number of simulations is increased.