
Springer-Verlag Berlin Heidelberg 2011


Autonomization of Two-Wheeled Inverted Pendulum
Robot Kinect-Based Obstacle Detection and Avoidance
System
Daniel Dreszer, Kamil Gnacik, Paweł Kaleta, Oliwia Szymańska
Student Science Club of Robotics "Encoder",
Institute of Automatic Control, Silesian University of Technology,
Akademicka 2A, 44-100 Gliwice, Poland
{danidre549,kamigna296,pawekal744,oliwszy419}@student.polsl.pl
http://encoder.polsl.pl
Abstract. The article describes the process of autonomizing an inverted pendulum robot by adding a surroundings monitoring system based on the Microsoft Kinect sensor and an obstacle detection and avoidance algorithm implemented in the National Instruments LabVIEW programming environment. The hardware added to support this new functionality is described, and the processing of depth data acquired from the Kinect sensor, along with the algorithm searching for obstacle-free areas, is explained. The presented project is the outcome of the work of a student science club.
Keywords: robotics, inverted pendulum robot, obstacle detection and avoid-
ance, Kinect, LabVIEW.
1 Introduction
As a part of the work of Student Science Club of Robotics "Encoder" at Silesian Uni-
versity of Technology, the authors of this paper created an autonomous two-wheeled
robot. The project is a continuation of the master's thesis of Tomasz Fortuna [1], who built an inverted pendulum robot capable of maintaining balance and driving in a given direction. The intention of the authors was to give the robot autonomy, so that it could move independently and avoid obstacles, a goal being one of the most important problems in robotics [2]. This was achieved using the Microsoft Kinect sensor, which is equipped with an infrared depth camera. The system runs on a small PC and a National Instruments sbRIO controller cooperating with it, both mounted on the robot
frame.
The paper is organised as follows. Section 2 lists the hardware improvements made
to adapt the robot to the new functionality. Section 3 presents various aspects of the
obstacle detection and avoidance system: characteristics of the hardware used for
monitoring robot surroundings, a short explanation of the choice of programming
environment used for software creation and a detailed description of the implemented
algorithm. The experimental setup and the results of the conducted test of the system are presented in Section 4. Finally, Section 5 describes the plans for further development
of the project.
2 Improvement of the Existing Robot Construction
The first task the team had to face was to make the robot meet all new requirements.
The improvements made are related to the motors and the computing unit that con-
trols the way the robot works.
Originally, the robot was steered with a microcontroller-based unit. Its capabilities included acquiring data from an accelerometer, controlling the motors and communicating with a remote user via a device based on the XBee standard.
One of the ideas, even before starting the project, was to use the Kinect to detect obstacles in the robot's surroundings. However, to make this possible, all its hardware requirements had to be taken into account. First of all, the sensor needed a platform running the Windows 7 operating system. To make this happen, we built a compact computer based on a Mini-ITX mainboard. Further, to ensure the sensor works properly, we installed an Intel Core i3 CPU and 4 GB of RAM in the computer. The operating system runs on an SSD, which makes the unit more resistant to the vibrations that appear during the use of the robot.
The next area improved was, as mentioned before, the motors. Our first concern was the fact that the robot used encoders of poor resolution: only 15 pulses generated per revolution. What is more, the measurements they were taking were used neither in the stabilizing algorithm nor during movement. This made the development we planned impossible. The solution to this problem was to improve the stabilization algorithms with an additional feedback loop containing data acquired from the encoders. Moreover, the newly installed encoders also have an improved resolution of 24 pulses per revolution. All of these changes let the robot hold its position more effectively and, what is very important for the purposes of autonomization, the robot can now compute the distance it has covered.
One more very important change is the use of the sbRIO 9636 platform (for the manual see [3]) made by National Instruments instead of the microcontroller-based unit. Its fundamental advantage is an FPGA unit, which makes the whole system work more efficiently. What is more, the number of analog inputs and outputs not only meets the current needs, but also leaves some room for additional sensors.
3 Obstacle Detection and Avoidance
The basic objective of the project was to ensure collision-free robot movement in
unknown indoor surroundings. This requires monitoring the environment with sen-
sors; a decision was made to use a Microsoft Kinect for Windows sensor for this task. At first, its manufacturer intended it only as a peripheral device for the Xbox 360 game console, but the wide range of abilities of the sensor and the great interest it caused among programmers, engineers and hobbyists made Microsoft release an official SDK (Software Development Kit) for the Windows 7 operating system which enables data acquisition from the individual sensors constituting the Kinect device.
A feature of the Kinect especially important for the purpose of robot autonomization is its capability of taking distance measurements using the built-in infrared light emitter and camera. The data returned by the sensor is a depth image of resolution 640×480, 320×240 or 80×60 given at a rate of 30 fps. In this project, the highest resolution was used. Each pixel is the distance in millimetres between a detected object and the vertical plane containing the sensor. It is worth noting here that the hardware and firmware version available to us has an operating range of 80 cm to 4 m. Because of that, in the future it is going to be necessary to use additional distance sensors, e.g. ultrasonic ones, to monitor the gap of 0 to 80 cm.
The obstacle detection and avoidance algorithm was implemented in the National Instruments LabVIEW environment, with the use of the Kinesthesia Toolkit for Microsoft Kinect [4] created by students at the University of Leeds. The toolkit is a LabVIEW wrapper for the DLL (Dynamic Link Library) included in the SDK; it provides some additional functions for processing the acquired data as well. This programming environment was chosen for several reasons. Firstly, it provides a way of fast and
effective code creation and easy modification, as LabVIEW is a graphical program-
ming language. Additionally, parallel code execution is inherent in it, which enables
utilization of both cores of the processor mounted on the robot without doing addi-
tional work of creating a multithreaded application. Moreover, the sbRIO platform controlling the balance and movement of the robot must be programmed in LabVIEW, so communication between the two programs, on the computer and on the sbRIO, was easy to achieve using the mechanisms offered by the programming environment, e.g. shared variables. Using LabVIEW also lets the authors realize their plan
to take part in the international programming and engineering contest NI LabVIEW
Student Design Competition by presenting the described robot there.
The obstacle detection algorithm involves cyclically reading depth data frames and transforming them into row vectors. Each element of the vector is the value of the pixel closest to the sensor in the data frame column corresponding to that element. Before searching for the minimum values of the columns, all pixels of zero value, indicating a distance out of sensor range, are replaced with the maximum value that can be measured by the sensor, to avoid getting false information about the presence of occupied space. This procedure can be applied as the sensor is positioned such that the
camera's vertical field of view approximately covers the area from the floor to the
height of the robot.
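Although the project's code is written in LabVIEW, the frame-to-vector reduction described above can be sketched in Python with NumPy; the 4000 mm maximum and all names are our assumptions, not the authors' code.

```python
import numpy as np

KINECT_MAX_MM = 4000  # assumed upper end of the sensor's working range

def frame_to_row_vector(depth_frame: np.ndarray) -> np.ndarray:
    """Collapse a depth frame into a row vector of column minima.

    Each element is the distance (mm) of the pixel closest to the sensor
    in the corresponding image column. Zero pixels (out-of-range readings)
    are first replaced with the maximum measurable distance so they are
    not mistaken for nearby obstacles.
    """
    frame = depth_frame.astype(np.int32).copy()
    frame[frame == 0] = KINECT_MAX_MM          # out-of-range -> "far away"
    return frame.min(axis=0)                   # column-wise minimum

# Example: a tiny 3x4 "frame" with one zero (out-of-range) pixel
frame = np.array([[1200,    0, 900, 2000],
                  [1500, 3500, 950, 1800],
                  [1300, 3600, 980, 2100]])
print(frame_to_row_vector(frame))  # [1200 3500  900 1800]
```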
Next, it is determined whether the distances constituting the row vector exceed 1000 mm. This limit was chosen because it is large enough to lie within the working range of the sensor and to provide enough time and space for robot manoeuvres, if need be. At the same time, it is not larger than necessary, so the algorithm does not demand a needlessly large amount of free space in the room.
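The thresholding and interval search can be sketched as finding maximal runs of columns whose closest obstacle is farther than 1 m; the function and parameter names here are illustrative, not the authors' own.

```python
def free_intervals(row_vector, limit_mm=1000):
    """Return (start, end) pixel-index pairs (inclusive) of maximal runs
    of columns whose distances all exceed limit_mm, i.e. columns with no
    obstacle closer than 1 m of the sensor."""
    intervals, start = [], None
    for i, d in enumerate(row_vector):
        if d > limit_mm:
            if start is None:
                start = i                      # a free run begins here
        elif start is not None:
            intervals.append((start, i - 1))   # the run ended at the previous column
            start = None
    if start is not None:                      # run reaches the image edge
        intervals.append((start, len(row_vector) - 1))
    return intervals

print(free_intervals([900, 1500, 1600, 800, 2000]))  # [(1, 2), (4, 4)]
```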
The elements of the vector that satisfy the above condition form intervals corresponding to areas free of obstacles up to 1 m of distance from the sensor. From among these intervals the algorithm chooses only the ones wide enough for the robot to fit, with a defined safety margin of several centimetres. Conversion of the interval width from a number of pixels L to millimetres Lr, to compare it with the robot width, is done using the information about the distance from the sensor of the pixels adjacent to the interval. The horizontal resolution R of the depth image and the angle α of the horizontal field of view of the camera (as given in [5]) are also used in the conversion process. To better describe this, a diagram showing an example scene with the sensor and some obstacles is presented in fig. 1.
Fig. 1. Diagram of the Kinect sensor in a room with obstacles, with the camera's angle of view shown and the distance of interest marked at 1000 mm. Light gray lines indicate the angle of view of the interval of unoccupied space. Parameter d is the distance between the sensor and the objects being the physical ends of this interval and is measured by the depth camera.
In such a simple setup, conversion between the width in pixels and in millimetres is not complicated, as the width is measured in the plane of the image:

Lr = (2 d tan(α/2) / R) · L . (1)
However, it is not difficult to imagine a situation when the ends of the interval are not equally distant from the sensor. This is shown in fig. 2. Then, substituting neither d1 nor d2 for d in eq. (1) gives the proper result. Still not perfect but nevertheless more accurate calculations are thus done according to the following formulae:

Lr1 = (2 d1 tan(α/2) / R) · L , (2)

Lr2 = (2 d2 tan(α/2) / R) · L , (3)

Lr = sqrt( ((Lr1 + Lr2)/2)^2 + (d2 - d1)^2 ) . (4)
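The conversion can be sketched using the standard pinhole-camera relation, in which an object plane at distance d spans 2·d·tan(α/2) millimetres across R pixels, combined with the depth difference of the interval's ends by Pythagoras. The field-of-view value and the exact formulation are our assumptions, not the authors' code.

```python
import math

H_FOV_DEG = 57.0   # approximate horizontal field of view of the Kinect depth camera
RES_X = 640        # horizontal resolution of the depth image

def interval_width_mm(L_px, d1_mm, d2_mm, fov_deg=H_FOV_DEG, res_x=RES_X):
    """Estimate the physical width of a free-space interval.

    L_px         - interval width in pixels
    d1_mm, d2_mm - measured distances of the objects at the interval's ends
    """
    def mm_per_px(d):
        # width of one pixel projected onto a plane at distance d
        return 2.0 * d * math.tan(math.radians(fov_deg) / 2.0) / res_x

    # lateral extent taken at the mean of the two end distances
    lateral = 0.5 * (mm_per_px(d1_mm) + mm_per_px(d2_mm)) * L_px
    # when d1 != d2 the interval is slanted in depth: add the depth difference
    return math.hypot(lateral, d2_mm - d1_mm)

# Equal end distances reduce to the simple in-plane case:
print(round(interval_width_mm(100, 1000, 1000)))  # about 170 mm for 100 px at 1 m
```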
Fig. 2. Diagram of the Kinect sensor in a room with obstacles, where the objects at the ends of the free space interval are not equidistant from the sensor and the physical width of the interval no longer lies in the plane of the camera image.
After the locations of the free areas have been determined, the algorithm checks if one of them lies on the current robot trajectory, allowing it to continue moving straight ahead. If not, but some differently located obstacle-free areas were found, the robot turns as little as possible, only enough to avoid collision and fit into a free space interval. If no collision-free trajectory could be determined, the robot is instructed to turn back and search for a path again.
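The decision rule above can be sketched as follows; this is a hypothetical pixel-offset version (the real controller presumably works on turn angles), and all names are ours.

```python
def choose_turn(intervals, res_x=640):
    """Pick a steering action from the free intervals found in one frame.

    Returns 0 if some interval covers the image centre (keep going
    straight), otherwise the smallest signed pixel offset from the centre
    to the nearest interval edge (turn as little as possible), or None
    when no free interval exists (turn back and search again).
    """
    centre = res_x // 2
    if not intervals:
        return None                      # no free path: turn back
    best = None
    for start, end in intervals:
        if start <= centre <= end:
            return 0                     # current trajectory is already free
        # signed offset to the closer edge of this interval
        offset = start - centre if start > centre else end - centre
        if best is None or abs(offset) < abs(best):
            best = offset
    return best

print(choose_turn([(0, 100), (400, 639)]))  # 80: nearest free area is to the right
```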
The results of each single run of the algorithm, i.e. the necessary turns calculated from one depth data frame each, are averaged using a running average of 8 samples to eliminate the noise in the depth camera measurements. Such an approach is less time-consuming than doing the averaging already at the stage of depth data acquisition, as then large matrices (images of 640×480 pixels) would have to be averaged element-wise.
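The 8-sample running average over scalar turn commands is cheap compared with element-wise averaging of whole frames; a minimal sketch (class name is ours):

```python
from collections import deque

class RunningAverage:
    """Average the last n scalar turn commands to smooth out
    depth-camera measurement noise (n = 8 in the described system)."""
    def __init__(self, n=8):
        self.window = deque(maxlen=n)  # oldest sample drops out automatically

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

avg = RunningAverage(n=8)
for turn in [10, 12, 8, 11]:         # turn commands from consecutive frames
    smoothed = avg.update(turn)
print(smoothed)  # 10.25, the mean of the four samples seen so far
```

Averaging one number per frame instead of a 640×480 matrix reduces the per-frame cost of smoothing by five orders of magnitude.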
4 Experimental Verification of the Operation of the System
In order to verify the created algorithm, a test track was built. The diagram given below in fig. 3 shows the details of its construction. They are very important, because all objects that physically could not be detected by the Kinect sensor were eliminated from the created environment. These objects include glass elements, which do not reflect infrared waves, and objects too close to the start or end points of the track to fall within the working range of the infrared sensor. They pose a threat of damage to the robot and had to be removed. Such objects have to be detected by other sensors, such as ultrasonic ones.
Fig. 3. Diagram of the test track.
The test consisted of crossing the path from the 'start' to the 'stop' point and back (at the 'stop' point the track was limited by a wall). The results were satisfactory, since the developed system allowed for passing the track without collisions. During the test, the messages sent to the control unit were monitored in order to verify their correctness. On the basis of the test it can be concluded that the implemented algorithm works properly and does not require additional adjustments. Unfortunately, the Kinect sensor proved to be an insufficient tool for navigating in real environments, mainly due to its limited field of view. As was mentioned before, as a solution to this problem it is proposed to use additional sensors (ultrasonic and infrared sensors of short working range) that will allow for eliminating the blind spots. This will allow for monitoring the area near the robot and detecting sudden changes threatening collision-free robot movement.
5 The Future of the Project
The actions presented in this article are only the first stage of work on the expansion
of robot functionality. Possibilities offered by the Kinect sensor used in the project
allow for the use of the robot in many practical tasks. After having provided collision-
free motion of the robot in an unknown environment, the team will be working on the
next concept of the project, which is searching for rooms with given numbers. The environments in which the robot will move will thus mainly be corridors in buildings. Therefore, we will need to take into account people walking along the corridor, which is a much bigger challenge for the obstacle detection and avoidance algorithm, as they are moving obstacles. Extending the created system to recognize moving objects requires the development of algorithms that will allow the movement of those objects to be monitored. Moreover, work will be required on problems related to the specificity of searching for the rooms, like the strategy of moving in the building as well as
reading the room numbers from the panels placed on the doors. In the implementation of the last task it is planned to use the Kinect sensor's built-in RGB camera. In addition, it is planned to enrich the existing system with a map builder. This will allow for determining more efficient routes for the robot. Ideas of solutions to
these problems have already been proposed; however, so far the basic autonomization
of the robot was a priority, of course.
Acknowledgement
The project reported in this paper has been partially financed by the European Union
from European Social Fund, as a part of Operational Programme Human Capital,
number POKL.04.01.02-00-020/10.
References
1. Fortuna, T.: Projekt, konstrukcja i oprogramowanie robota typu odwrócone wahadło. Master's thesis
2. Trojnacki, M., Szynkarczyk, P.: Tendencje rozwoju mobilnych robotów lądowych (3). Autonomia robotów mobilnych - stan obecny i perspektywy rozwoju. Pomiary, Automatyka, Robotyka 9/2008, pp. 5-9
3. OEM Operating Instructions and Specifications. NI sbRIO-9605/9606 and NI sbRIO-
9623/9626/9633/9636, http://www.ni.com/pdf/manuals/373378c.pdf
4. Kinesthesia Toolkit for Microsoft Kinect overview and download,
http://sine.ni.com/nips/cds/view/p/lang/pl/nid/210938
5. Brecher, C. et al.: 3D Assembly Group Analysis for Cognitive Automation. Journal of Ro-
botics, Vol. 2012 (2012)
