6, December 2016
designed in the Simulink were deployed into the ARM processor on the RPi2 via the Ethernet cable.

image of a scene with an ARTag (a marker with a unique code used e.g. for indoor robot navigation) and the resulting edge image using the Sobel operator.

maximums in this metric map depicted in the original image by purple crosses. The lines detected in the input image are, again, very convenient features, especially for robot navigation in both indoor and outdoor environments [1].

between the RPi2 and computer with Simulink models.

The performance of the RPi2 platform on machine vision tasks was measured by the FPS indicator (frames per second) independently for all models. Moreover, each model has been tested at four generally used resolutions: 160x120, 320x240, 640x480 and 800x600 pixels.
At the very top of Fig. 8, the Simulink model of edge detection is depicted. This model uses a Sobel operator block with two output signals, Gv and Gh, related to the vertical and horizontal edges, respectively. The signals Gv and Gh are added up and the result is routed straightforwardly to an output display.
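The data flow of this edge model can be sketched in plain NumPy (a hypothetical re-implementation, not the Simulink block itself). One assumption to note: the sketch sums the absolute responses so the combined edge image stays non-negative, a common variant of the plain addition used in the model.

```python
import numpy as np

# 3x3 Sobel kernels: SOBEL_X responds to vertical edges (the Gv signal),
# its transpose responds to horizontal edges (the Gh signal).
SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def filter3x3(image, kernel):
    """Valid-mode 3x3 filtering implemented with plain array slicing."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * image[i:i + h - 2, j:j + w - 2]
    return out

def sobel_edges(gray):
    gv = filter3x3(gray, SOBEL_X)   # vertical-edge signal (Gv)
    gh = filter3x3(gray, SOBEL_Y)   # horizontal-edge signal (Gh)
    return np.abs(gv) + np.abs(gh)  # combined edge image
```

For example, an image containing a single vertical intensity step produces a strong response only in the columns around the step.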
Further, in the middle of Fig. 8, the Simulink model of corner detection is depicted. The corner detection block implements a corner detection algorithm on the input grayscale image. The block offers three algorithms: Harris&Stephens, Shi&Tomasi (both described above) and Rosten&Drummond. Note that the last-mentioned algorithm uses local intensity comparison, an effective technique that makes it approximately twice as fast as either of the previous two. Furthermore, the maximum number of detected corners was saturated at 100, and the parameter κ (kappa) in equation (3) was kept permanently equal to 0.04 during our experiments.
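Assuming the standard Harris&Stephens corner response R = det(M) − κ·trace(M)², the settings quoted above (κ = 0.04, at most 100 corners) can be sketched as follows. The gradient operator and box-filter window used here are illustrative choices, not the internals of the Simulink block.

```python
import numpy as np

def harris_corners(gray, k=0.04, max_corners=100, win=3):
    """Harris&Stephens response R = det(M) - k * trace(M)^2,
    returning at most max_corners strongest (row, col) positions."""
    iy, ix = np.gradient(gray.astype(float))   # simple image gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):                                # win x win box filter
        h, w = a.shape
        r = win // 2
        out = np.zeros_like(a)
        for i in range(h):
            for j in range(w):
                out[i, j] = a[max(0, i - r):i + r + 1,
                              max(0, j - r):j + r + 1].sum()
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    r_map = (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2
    strongest = np.argsort(r_map.ravel())[::-1][:max_corners]
    rows, cols = np.unravel_index(strongest, r_map.shape)
    return list(zip(rows.tolist(), cols.tolist())), r_map
```

On an image containing a single bright square, the strongest response lands on the square's free corner, while flat regions score zero and edges score negatively.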
Finally, at the bottom of Fig. 8, the Simulink model for line detection by means of the Hough transform is depicted. As can be seen from the model, the Hough Transform block works with the thresholded (i.e. binary) image obtained by the Sobel operator applied directly to the input RGB image. This block outputs, besides the rho and theta vectors, also the Hough space, i.e. the already mentioned accumulator. The next block, Local Maxima, finds no more than a given number of the strongest peaks in the accumulator and outputs their rho and theta coordinates. Note that the range of theta is −π/2 ≤ θ < +π/2 with a step size of π/180 radians; this means that lines in the input image are detected with a precision of 1 degree, and the left and right sides of the Hough accumulator in Fig. 5 relate to line inclinations of −90 degrees and +90 degrees (vertical lines). The last important block, needed however only for visualization purposes, is Hough Lines, intended for computing the end points of the detected lines. These end points are subsequently used for imprinting the lines into the input RGB image. Last but not least, several Submatrix and Selector blocks have been employed for operations on the rho and theta vectors.
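The accumulator stage of this pipeline can be sketched as follows: edge pixels vote into a rho–theta array with theta stepped by one degree over −π/2 ≤ θ < +π/2, matching the step size described above. As a simplification, the Local Maxima search is reduced here to a global top-k over the accumulator, and the end-point computation of the Hough Lines block is omitted.

```python
import numpy as np

def hough_lines(binary, num_peaks=5):
    """Vote edge pixels of a binary image into a rho-theta accumulator
    and return the num_peaks strongest (rho, theta-in-degrees) pairs."""
    h, w = binary.shape
    thetas = np.deg2rad(np.arange(-90, 90))      # -pi/2 <= theta < pi/2, 1-degree steps
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    rhos = np.arange(-diag, diag + 1)            # rho resolution: 1 pixel
    acc = np.zeros((len(rhos), len(thetas)), dtype=np.int64)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), one vote per theta bin
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[r + diag, np.arange(len(thetas))] += 1
    peaks = np.argsort(acc.ravel())[::-1][:num_peaks]
    pr, pt = np.unravel_index(peaks, acc.shape)
    return rhos[pr], np.rad2deg(thetas[pt]), acc
```

For a 10-pixel vertical line at x = 3, every pixel votes for the same bin at theta = 0, rho = 3, so that bin collects all ten votes.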
IV. EXPERIMENTAL RESULTS

As already mentioned in the text above, all three Simulink models were designed, implemented and finally verified and tested on the RPi2 platform at four different resolutions. The performance of the feature extraction methods has been measured and compared in this paper by means of the frames per second indicator.
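The frames-per-second indicator itself is straightforward to compute; a hypothetical timing helper (the function name, warm-up policy and callback interface are assumptions for illustration, not the authors' test harness) might look like:

```python
import time

def measure_fps(process_frame, frames, warmup=2):
    """Average frames per second of a per-frame processing callback."""
    for f in frames[:warmup]:          # untimed warm-up iterations
        process_frame(f)
    start = time.perf_counter()
    for f in frames:
        process_frame(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

A callback that takes roughly 10 ms per frame yields an FPS value of at most about 100, reduced slightly by loop and timer overhead.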
TABLE I. COMPARISON OF FPS VALUES FOR THE DESIGNED MODELS AND DIFFERENT IMAGE RESOLUTIONS

FPS                                  160x120   320x240   640x480   800x600
Edge detection (Sobel operator)         72.4      18.2       4.5       2.9
Corner detection (Harris&Stephens)      22.6       4.7       1.2       0.7
Line detection (Hough transform)        19.6       5.7       1.6       1.0

Both the internal (in Simulink) and external (deploying to RPi2) modes were tested; nevertheless, only the second one is important for our purposes of a cheap robotics platform. Values of FPS are shown in Table I.

Note that all models described in the previous chapter were deployed directly onto the RPi2 platform during the test stage, but the resultant images were displayed back on a monitor of a workstation with Simulink due to the absence of a Raspberry Pi proprietary display. The transfer of image data between the RPi2 platform and the workstation may influence the FPS values in the table. It follows that the values in Table I relate to the worst cases, and the FPS values would be higher if the visualization blocks had not been implemented, or had been implemented directly on the RPi2 platform.

To sum up, we may conclude that the measured FPS values are relatively good and the RPi2 platform is certainly usable for the majority of usual robotics tasks, especially when optimizations of the Simulink schemes and target code (C/C++) have been carried out.

ACKNOWLEDGMENT

This work was supported by the grant Research of New Control Methods, Measurement Procedures and Intelligent Instruments in Automation No. FEKT-S-14-2429 from the Internal Grant Agency at BUT.

REFERENCES

[1] L. Zalud, P. Kocmanova, F. Burian, and T. Jilek, Color and thermal image fusion for augmented reality in rescue robotics, Lecture Notes in Electrical Engineering, vol. 291, pp. 47-55, 2014.
[2] J. Klecka and K. Horak, Fusion of 3D model and uncalibrated stereo reconstruction, Advances in Intelligent Systems and Computing, vol. 378, pp. 343-351, 2015.
[3] M. Richardson and S. Wallace, Getting Started with Raspberry Pi, Maker Media, Inc., 2012.
[4] D. Vernon, Machine Vision: Automated Visual Inspection and Robot Vision, Hemel Hempstead: Prentice Hall International Ltd., 1991, p. 260.
[5] J. C. Russ, The Image Processing Handbook, Boca Raton, Florida: CRC Press Inc., 1994.
[6] W. G. Kropatsch and H. Bischof, Digital Image Analysis, Springer-Verlag New York, Inc., 2001.
[7] R. C. Gonzales and R. E. Woods, Digital Image Processing, third ed., Pearson Education, Inc., 2008.
[8] I. T. Young, J. J. Gerbrands, and L. J. Vliet, Fundamentals of Image Processing, TU Delft, 1998.
[9] R. P. Nikhil and K. P. Sankar, A review on image segmentation techniques, Pattern Recognition, vol. 26, no. 9, pp. 1277-1294, September 1993.
[10] K. Horak, Detection of symmetric features in images, in Proc. International Conference on Telecommunications and Signal Processing, 2013, pp. 895-899.
[11] M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis and Machine Vision, Toronto: Thomson, 2008.
[12] C. P. Papageorgiou, M. Oren, and T. Poggio, A general framework for object detection, in Proc. 6th International Conference on Computer Vision, 1998, pp. 555-562.
[13] D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.

Karel Horak was born in Pardubice, Czech Republic, on August 1, 1979. He received the M.Sc. and Ph.D. degrees in cybernetics, control and measurements in 2004 and 2008, respectively, from Brno University of Technology. He is currently working as an assistant professor at the Department of Control and Instrumentation, BUT, where he heads the Computer Vision Group. He has authored and co-authored almost fifty papers and journal articles in the areas of computer vision and image processing.

Ludek Zalud was born in Brno, Czech Republic, on August 18, 1975. He received the M.Sc. and Ph.D. degrees in cybernetics, control and measurements in 1998 and 2002, respectively, from BUT. He is currently working as an associate professor at the Department of Control and Instrumentation, BUT, where he heads the Robotics Group. He has authored and co-authored more than one hundred papers and journal articles in the area of robotics and bioengineering.