1 Introduction
Vision systems play a crucial role in various types of robots [9,10] and unmanned
vehicles [1,2]. Autonomous flying robots (also referred to as autonomous
flying agents) are the class of unmanned mobile robots that operate in the air.
They are equipped with sensors that collect information about the surrounding
environment [5,6]. Beside the need to collect mission-specific data, such
information enables the robot to find a collision-free path between obstacles.
Another typical challenge for mobile agents is to identify their location in
space [2]. This task is difficult as it has to be solved on-line [10]. During
flight the robot has to process information quickly to operate confidently in a complex and
sometimes changing environment. In this paper an algorithm for two-dimensional
scene analysis is presented. The presented algorithm is the first step in creating
a vision-based system of scene analysis and understanding for an autonomous
flying agent in the context of navigation possibilities. It should be stressed
that the problem of image understanding is far more general and abstract
than pattern recognition - see [11,12] in the context of medical images.
L. Bolc et al. (Eds.): ICCVG 2012, LNCS 7594, pp. 304–312, 2012.
© Springer-Verlag Berlin Heidelberg 2012
Syntactic Algorithm of Two-Dimensional Scene Analysis 305
2 Problem Formulation
Let us consider an autonomous flying robot equipped with a map which is
a preprocessed satellite image of an urban environment. The robot has to be
equipped with a camera pointed at the ground so that it can take pictures
of the surface that it flies above. The map that the robot carries presents the
buildings extracted from the base satellite image (see Fig.1). The problem of
preprocessing, which, among others, consists of object extraction from the image,
is out of the scope of this paper.
In order to find its location on the map the robot takes successive pictures of
the ground below. Then it compares the shape of the building extracted from
the picture and locates it on the bigger map. An example of a picture with
one building is shown in Fig.2. The method of object recognition has to be
rotation and scale invariant as the pictures of the ground are taken from different
altitudes and various directions of the robot's flight. This problem belongs to the
group of tasks consisting in recognizing and representing polygonal-type objects by
a robot's sensory systems [4].
Fig. 2. Preprocessed picture presenting the small area with only one building
Fig. 3. Search around the window, starting from the point outside the building contour (figure labels: starting point, direction of search, window size S = 9)
of a building) the next point is found using the window. If the starting point of the
window lies outside the contour of a building, the search for the border point is
conducted clockwise (see Fig.3); in the other case, counter-clockwise (see Fig.4).
A search conducted in this manner leads to circling around the building contour
clockwise. It is important for the object recognition algorithm that the list
of points is built in the same direction in all cases. The output of this step is a
sequence of points which can be interpreted as a sequence of vectors located
around the building contour (a point that is neither the first nor the last in the sequence
is the end of one vector and the beginning of the next one).
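The window-based border search described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the S = 9 window is reduced to the immediate 8-neighbourhood for brevity, the image is assumed to be a binary 2D list (1 = building pixel), and all function names are assumptions.

```python
# Neighbour offsets enumerated clockwise, starting from "east"
# (image coordinates: x grows right, y grows down).
CLOCKWISE = [(1, 0), (1, 1), (0, 1), (-1, 1),
             (-1, 0), (-1, -1), (0, -1), (1, -1)]

def next_border_point(img, x, y, start_dir=0):
    """Scan the window around (x, y) clockwise and return the first
    building pixel found, together with the direction it was found in."""
    h, w = len(img), len(img[0])
    for i in range(8):
        dx, dy = CLOCKWISE[(start_dir + i) % 8]
        nx, ny = x + dx, y + dy
        if 0 <= nx < w and 0 <= ny < h and img[ny][nx] == 1:
            return (nx, ny), (start_dir + i) % 8
    return None, None  # isolated pixel, no neighbour on the contour

def trace_contour(img, start):
    """Follow the border clockwise until the start point is reached again,
    collecting the sequence of contour points."""
    contour = [start]
    point, d = next_border_point(img, *start)
    while point is not None and point != start:
        contour.append(point)
        # restart the scan just "behind" the incoming direction
        point, d = next_border_point(img, *point, (d + 6) % 8)
    return contour
```

For a 2x2 building block the trace visits its four border pixels in clockwise order, which matches the requirement that the point list is always built in the same direction.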
Fig. 4. Search around the window, starting from the point inside the building contour (figure labels: starting point, direction of search, window size S = 9)
308 A. Bielecki, T. Buratowski, and P. Smigielski
After obtaining the long sequence of points on the contour it is vital to simplify
that representation before running the object recognition algorithm. In this part
a new sequence of points is created, based on the sequence from the previous step.
The points in the new list represent the corners of the building. To avoid the situation
in which there is a corner with no point located exactly on it (which would result
in the effect of bending walls), the corners are not taken straight from the list of
points but are calculated in the way described below. The final list is obtained in
the following two steps:
1. The algorithm iterates through the list of points and searches for those that
deviate from the line determined by the two previous points (see Fig.5). Let us
have four consecutive points A, B, C and D (point D is the one currently processed).
Two angles are calculated: one given by vector (A, B) and point C (with
apex in point B) and another given by vector (B, C) and point D (with apex
in point C). If the sum of those angles exceeds a given threshold T, the
following three points are added to the new list (the sequence is important):
B, q, D. Point B is the apex of the first angle and D is the end point of the
second angle (B and D belong to two adjacent walls of the building). Point
q, which plays the role of a marker, is added to detect the placement of the
corner in the next step. The output of this step is a list of the following
form: (X1, X2, q, X3, X4, q, X5, X6, ..., Xn-3, Xn-2, q, Xn-1, Xn).
Fig. 5. If the sum of the subsequent angles exceeds the given threshold, points B and D are put in the list along with the marker q between them, denoting the placement of a corner
2. The algorithm iterates through the previously obtained list and searches for
the markers q. When a marker is found on the list, the two preceding and the two
following points are taken for further computations (for the example let
them be X1, X2, X3, X4). The lines determined by the vectors (X1, X2) and
(X3, X4) are computed. The crossing point C of those lines is taken as the
actual corner of the building. As a result we obtain a list which consists
of the starting corner and the rest of the corners that replace the markers:
(X1, C2, C3, ..., Cm), where m is the overall number of corners (and, at
the same time, walls) of the building.
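Step 1 of the procedure above can be sketched as follows; the threshold T (here in radians), the exact angle computation, and the use of the string 'q' as the marker are illustrative assumptions, not the authors' implementation.

```python
import math

def angle(p, q, r):
    """Absolute turn angle at apex q, formed by the vector (p, q)
    and the point r."""
    a1 = math.atan2(q[1] - p[1], q[0] - p[0])
    a2 = math.atan2(r[1] - q[1], r[0] - q[0])
    d = abs(a2 - a1)
    return min(d, 2 * math.pi - d)

def mark_corners(points, T):
    """Step 1: for each window of four consecutive points A, B, C, D,
    emit B, the marker 'q', and D whenever the sum of the two turn
    angles (apexes B and C) exceeds the threshold T (radians)."""
    out = []
    for i in range(3, len(points)):
        A, B, C, D = points[i - 3], points[i - 2], points[i - 1], points[i]
        if angle(A, B, C) + angle(B, C, D) > T:
            out += [B, 'q', D]
    return out
```

On a contour running right and then down, the single right-angle corner produces the pattern B, q, D twice (once for each window that sees the turn), which is then resolved to a single corner point in step 2.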
There is a special case in which a marker does not have four neighbouring points
but only one on its left or right side. This can happen when two or more corners are
found very close to each other. In that case the crossing point cannot be calculated
and one of the neighbours of the marker is taken as the corner point (in our tests
it was the preceding one). The vectorization algorithm was run for the big map and
for the picture of a single building. The results of the algorithm are presented in Fig.6
and Fig.7.
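Step 2 reduces to a standard line-line intersection. The following sketch (the function name and the parallel-line tolerance are assumptions) returns None in the degenerate case, corresponding to the special case above in which a neighbour of the marker is used as the corner instead.

```python
def line_intersection(p1, p2, p3, p4):
    """Step 2: intersect the line through (p1, p2) with the line through
    (p3, p4); the result replaces the marker q as the actual corner.
    Returns None for (nearly) parallel lines."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-9:
        return None  # lines are parallel: fall back to a marker neighbour
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / den,
            (a * (y3 - y4) - (y1 - y2) * b) / den)
```

Intersecting the wall segments adjacent to a marker in this way places the corner exactly at the meeting point of the two walls, avoiding the "bending walls" effect mentioned above.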
The aim of the object recognition algorithm is to find the building in the big map
that is similar to the one in the small map. The input for this algorithm consists
of two vectorized maps. One is the map presenting a large area filled with buildings
and the other has only one building, similar to one placed in the big map,
but scaled and rotated. The algorithm transforms each vector representation
of the buildings from both the big and the small map into a more suitable
representation. The shape representation which is utilized by this algorithm is
based on the notion of nominal features [13]. After the transformation the
algorithm takes each new representation of a building from the big map and
compares it to the new representation of the building from the small map. The
comparison is conducted in the following way:
features of the second building are searched for in the doubled list (see Fig.8). To
make this algorithm less prone to inaccuracies in the process of parsing the raw
pictures and building the vector representation, the matching values do not
have to be exactly equal. Instead, they are compared with a given threshold
(ε). Formally, the compared angles match if |α1i − α2j| < ε, where
α1i is the angle from the first list and α2j is the angle from the second list.
If an overall match between the angles is found, the quotients of the matching wall
lengths are calculated. If all the quotients are similar (their standard deviation
is below a given threshold), the buildings are treated as similar and the algorithm
quits. If the quotients differ significantly (the lengths of the paired
walls are different with respect to the scale), the search for a matching
substring is continued. The maximum number of checks is equal to m.
[[45, 12], [315, 12], [225, 3], [135, 8], [225, 4], [315, 8], [225, 3], [135, 12]]
[[45, 17], [315, 40], [45, 21], [135, 40], [45, 18], [315, 61], [225, 57], [135, 61]]
[[270, 12], [270, 12], [270, 3], [90, 8], [90, 4], [270, 8], [270, 3], [270, 12], [270, 12], [270, 12], [270, 3], [90, 8], [90, 4], [270, 8], [270, 3], [270, 12]]
[[270, 17], [270, 40], [270, 21], [270, 40], [270, 18], [270, 61], [270, 57], [270, 61]]
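The matching procedure over the doubled list can be sketched as follows, using the [angle, length] pair format of the listings above. The threshold names and the use of the relative standard deviation of the length quotients as the similarity criterion are assumptions, not the authors' exact formulation.

```python
import statistics

def find_match(big, small, eps=15, dev_threshold=0.1):
    """Try to align the small building's [angle, length] features with a
    cyclic shift of the big building's: angles must agree within eps
    degrees, and the wall-length quotients must be near-constant, which
    gives rotation and scale invariance."""
    m = len(big)
    if m != len(small):
        return False
    doubled = big + big  # doubling makes every cyclic shift a substring
    for shift in range(m):  # at most m checks, as in the text
        window = doubled[shift:shift + m]
        if all(abs(w[0] - s[0]) < eps for w, s in zip(window, small)):
            quotients = [w[1] / s[1] for w, s in zip(window, small)]
            if statistics.pstdev(quotients) < dev_threshold * statistics.mean(quotients):
                return True
    return False
```

A rectangle described starting from a different corner and scaled by a factor of two is still recognized, since some cyclic shift aligns all angles and every length quotient equals the scale factor.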
4 Concluding Remarks
The results of the vectorization algorithm show its ability to accurately locate
and circle round the object in the picture. The representation requires a minimum
amount of memory as it consists only of the locations of the buildings' corners. The
described method shows the ability to find a match between scaled and rotated
objects. Besides that, it can find similarity between buildings that are not exactly
the same, as the quotients of wall lengths may vary up to the given threshold. Both
algorithms are fast and memory efficient. That is important because they are meant
to be used in on-line processing of the pictures collected by the autonomous
flying agent. The described results are the first step in creating a vision-based
system for flying agent localization based on syntactic scene analysis. It should
be mentioned that syntactic methods for scene analysis based on a graph approach
have also been considered [3], including in the context of aiding them with probabilistic [7,8]
and fuzzy [1] methods. Parallel parsing has been studied as well [1].
References
1. Bielecka, M., Skomorowski, M., Bielecki, A.: Fuzzy syntactic approach to pattern
recognition and scene analysis. In: Proceedings of the 4th International Conference
on Informatics in Control, Automatics and Robotics ICINCO 2007, ICSO Intelligent
Control Systems and Optimization, Robotics and Automation, vol. 1, pp.
29–35 (2007)
2. Filliat, D., Meyer, J.A.: Map-based navigation in mobile robots. A review of localization
strategies. Journal of Cognitive Systems Research 4, 243–283 (2003)
3. Flasinski, M.: On the parsing of deterministic graph languages for syntactic pattern
recognition. Pattern Recognition 26, 1–16 (1993)
4. Katsev, M., Yershova, A., Tovar, B., Ghrist, R., LaValle, S.M.: Mapping and
pursuit-evasion strategies for a simple wall-following robot. IEEE Transactions on
Robotics 27, 113–128 (2011)
5. Muratet, L., Doncieux, S., Briere, Y., Meyer, J.A.: A contribution to vision-based
autonomous helicopter flight in urban environments. Robotics and Autonomous
Systems 50, 195–229 (2005)
6. Sinopoli, B., Micheli, M., Donato, G., Koo, T.J.: Vision based navigation for an unmanned
aerial vehicle. In: Proceedings of the International Conference on Robotics
and Automation ICRA, vol. 2, pp. 1757–1764 (2001)
7. Skomorowski, M.: Use of random graph parsing for scene labeling by probabilistic
relaxation. Pattern Recognition Letters 20, 949–956 (1999)
8. Skomorowski, M.: Syntactic recognition of syntactic patterns by means of random
graph parsing. Pattern Recognition Letters 28, 572–581 (2006)
9. Tadeusiewicz, R.: Vision Systems of Industrial Robots. WNT, Warszawa (1992)
10. Tadeusiewicz, R.: A visual navigation system for a mobile robot with limited computational
requirements. Problemy Eksploatacji 4, 205–218 (2008)
11. Tadeusiewicz, R.: Medical Image Understanding Technology. Springer, Heidelberg
(2004)
12. Tadeusiewicz, R.: Automatic image understanding - a new paradigm for intelligent
medical image analysis. Bio-Algorithms and Med-Systems 2(3), 3–9 (2006)
13. Tadeusiewicz, R., Flasinski, M.: Pattern Recognition. Polish Scientific Publishers
PWN, Warsaw (1991)