
18th International Conference on Electronics, Communications and Computers

Localization Control for LEGO Robots Navigation


Antonio Benitez
Ingeniería en Informática
Universidad Politécnica de Puebla
antonio.benitezruiz@gmail.com

Cristian Josue Moreno
Instituto de Investigaciones Psicológicas
Universidad Veracruzana
crijos2000@msn.com

Daniel Vallejo
Department of Computing, Electronics and Mechatronics
Universidad de las Américas - Puebla
daniel.vallejo@udlap.mx

Abstract

This paper describes the implementation of a localization control for the LEGO MindStorms RCX architecture. Taking advantage of this control, we describe why a differential architecture was selected to design the vehicle, and how a navigation algorithm was implemented to find a collision-free path from an initial to a goal configuration. In addition, we describe a graphical interface that simulates the behavior of the algorithm and verifies the robot's position and direction. We present results in which the vehicle was able to interpret the algorithm's output to reach the goal configuration in a physical environment.

1 Introduction

This project arose from the wish to include a LEGO rig in courses on automation control, to enable graduate students to transfer theory into practical experiments [3]. Projects of this kind are offered to students interested in the field, and this project is the third of its kind [9]; the two previous ones were related to basic behaviors.

LEGO bricks are cheap and can be put together in any combination, and in 1998 LEGO released the Robotics Invention System (RIS), a LEGO kit that lets you take control of the robots you build. This is done by means of a programmable LEGO brick: the RCX contains a small microcontroller that can be programmed using a computer.

The RCX consists of a Hitachi H8/300 processor and 32 KB of RAM, of which 28 KB are safe to use once the firmware that your programs need has been downloaded. The interface of the RCX consists of an IR port, three input ports, three output ports, an LCD screen, and four buttons (On-Off, View, Prgm, and Run). The IR port is usually used for communication with a computer, e.g. when downloading firmware or programs; the communication is transmitted through an IR tower, which is also included in the RIS. The port is not limited to this use, so the RCX can also communicate with other units equipped with an IR port, e.g. another RCX or a palm PC. The input ports are used for attaching sensors that measure temperature, light intensity, etc., and the output ports are used for powering motors, lamps, etc. On the LCD screen the RCX can display numbers or words of up to four digits; it also shows whether a program is running and which program is currently in use, since multiple programs can be downloaded to the RCX.

The RCX can be programmed using Robolab, which is included in the RIS. This software is fairly easy to use, as it is icon based: all you have to do is drag a component onto the workspace and set some constants, e.g. the number of seconds a motor should run. The program is made this simple because the target group for the RIS is kids aged 12 and up.

In fact, adults have found the RCX very exciting, as it is very easy to build and alter robots in LEGO, and more advanced programs for controlling the RCX have been developed in order to take full control of the powerful RCX unit. Nevertheless, no existing program provides a control that measures the distance covered while the robot is moving forward.

Most robot navigation algorithms use proximity sensors or image recognition as tools to build a localization control, that is, something that allows us to know where the robot is at any time. This work therefore focuses on incorporating this functionality into the LEGO RCX architecture, so that it can be used to develop more applications, particularly applications that explore and navigate environments where the localization of the robot is essential.

0-7695-3120-2/08 $25.00 2008 IEEE
DOI 10.1109/CONIELECOMP.2008.26

2 Construction of the Mobile Robot

The design and construction of a mobile robot suitable for the implementation of a localization control can be a significant challenge, because no construction guide of this kind exists.

This section describes the model that was followed for the construction of our robot, as well as the position and use of each of the fundamental parts that make up the mobile robot [1].

In order to decide what robot to build, it is necessary to choose the most appropriate type of configuration [5], that is, how the elements that compose the robot (for example: wheels, platform, motors, etc.) are going to be arranged. For this project a differential configuration was chosen because, after several tests, it proved to be the most stable [6].

2.1 Motors and Sensors

Figure 1. LEGO rotation sensor voltage graph.

Figure 2. Location of rotation sensors and gears.

Some of the main components of a mobile robot are the motors and sensors [10]. The sensors used were rotation (angle) sensors.

The function of these sensors is to count the turns that the track transmits to each wheel [8]. For that reason it is necessary to place them in an appropriate part of the robot, in such a way that they have a direct connection, or a connection through gears, with the motors [4].

The sensor must be coupled directly to the motor so that no counts are lost, which is why it was connected through gears close to the motor. It is worth mentioning that the sensor counts both positively and negatively, depending on the direction of rotation. With the aid of this sensor we were able to implement a positioning control algorithm, computing the distance the robot covers from the count reported by the sensor.

Reading the LEGO rotation sensor was a bit tricky. The rotation sensor implements a quadrature encoder with four distinct states for each slot. The sensor has a four-slot encoder disk, giving 16 counts per revolution, and it communicates this information as four distinct output levels. Figure 1 shows the four levels, their corresponding states, and how to decode those states into encoder counts. Note that when moving forward the states change in one order, and in reverse in the other order. By examining the previous and current state, the software can determine whether the encoder is moving forward or backwards. The level for each state was determined experimentally.
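As a sketch of the decoding just described, the following Python fragment turns a sequence of sampled encoder states into signed counts. The concrete state numbering and the helper names are our own illustrative assumptions; the actual voltage level for each state was, as noted, determined experimentally.

```python
# Illustrative sketch of quadrature decoding for a 4-state encoder.
# States 0..3 are assumed to follow each other in this order when the
# motor moves forward, and in the reverse order when it moves backwards.

def decode_step(prev_state, curr_state):
    """Return +1 (forward), -1 (reverse) or 0 (no move / invalid)."""
    if prev_state == curr_state:
        return 0
    if (prev_state + 1) % 4 == curr_state:
        return 1   # states advanced in forward order
    if (prev_state - 1) % 4 == curr_state:
        return -1  # states advanced in reverse order
    return 0       # a state was skipped: treat as invalid

def count_steps(states):
    """Accumulate signed encoder counts over a sequence of samples."""
    count = 0
    for prev, curr in zip(states, states[1:]):
        count += decode_step(prev, curr)
    return count
```

With four states per slot and four slots per disk, sixteen such counts correspond to one full revolution, matching the 16 counts per revolution stated above.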

3 Developing the Localization Control

This control was the most important tool developed during this project, since both the planning algorithm and the procedure that translates the collision-free path into robot instructions rely on it. It is important to note that the LEGO kit has no mechanism to measure the distance the robot covers during its operation; therefore, there is no built-in way to know where the robot is at any given moment.

3.1 Motor Calibration

The motors in the LEGO kit have 8 speed levels; nevertheless, because the robot uses caterpillar tracks, we needed to use speeds greater than 4.

The motor calibration process consists of searching for a combination of speeds at which both motors reach the same count (on each rotation sensor) over a particular period of time. This calibration process relies on the rotation sensors.

The calibration process can be run automatically or semi-automatically. In the first case, the algorithm calculates the speed for each motor to ensure that the robot keeps a straight line when moving forward. In the second case, the user helps the algorithm decide which speeds to use: the user can indicate whether the robot drifts to the left or to the right when moving forward, and propose how the speeds should be increased or decreased.

The automatic process, on the other hand, considers every combination of speeds for both motors and selects the one for which the difference between the two sensor counts is smallest.
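The automatic search over speed combinations might be sketched as follows. This is an assumption-laden illustration, not the authors' code: `run_motors` is a hypothetical helper standing in for driving both motors at the given speed levels for a fixed period and reading back the two rotation-sensor counts.

```python
# Hypothetical sketch of the automatic calibration step described above.
# run_motors(left, right) is an assumed callable that drives both motors
# at those speed levels for a fixed period and returns the two counts.

def auto_calibrate(run_motors, speeds=range(5, 9)):
    """Try every pair of speed levels (5..8, since the tracked robot
    needs speeds above 4) and keep the pair whose sensor counts differ
    the least, i.e. the pair that drives straightest."""
    best_pair, best_diff = None, float("inf")
    for left in speeds:
        for right in speeds:
            count_left, count_right = run_motors(left, right)
            diff = abs(count_left - count_right)
            if diff < best_diff:
                best_pair, best_diff = (left, right), diff
    return best_pair
```

In practice such a search would be run on the physical robot, where each `run_motors` trial is a short timed drive followed by reading both rotation sensors.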

3.2 Localization Control

To build the localization control, it was necessary to answer several questions: a) How many counts does the sensor accumulate when the motor makes one revolution? b) What distance does the robot cover per step and per sensor revolution? c) How many steps are needed for the robot to turn 90 degrees (to the left or to the right)? d) How many steps are needed for the robot to cover a distance of 20 cm?

After an experimentation stage, we obtained an answer for each question. Initially, observe in the robot design (Figure 2) that the movement of the motors goes through a gear of 8 teeth, and this motion reaches the sensor through another 8-tooth gear. Since these gears are equal, the sensor counts in the same way as the motor does; that is, each step of the motor corresponds to one step of the sensor, so for each motor revolution the sensor counts 16 steps.

Next, we have to calculate the distance covered by the robot when the motor makes one revolution. This distance is given by Formula 1:

Distance = π × Gear diameter    (1)

Since the gear diameter is 9 mm, the robot covers a distance of π × 0.9 ≈ 2.82 cm per motor revolution, and therefore approximately 0.176714 cm per motor step (one sixteenth of a revolution).

3.3 Control on Returns

Figure 3. The reference point moves approximately 14.3 cm when the robot turns right or left.

The robot must be able to turn 90 degrees to the left and to the right. To achieve this, it is necessary to calculate how many steps the robot needs to cover that distance. Since the robot covers 0.176714 cm per motor step, the number of steps the motor must make for the robot to turn right or left is calculated as NSteps = 14.3 / 0.176714, so NSteps = 80.92. Therefore the motor needs to make 81 steps for the robot to turn left or right, as illustrated in Figure 3. It is important to note that the direction of rotation of the motors determines whether the robot turns left or right.

Using the same criterion, we calculate the number of steps needed for the robot to cover a distance of 20 cm (the environment is built in such a way that the robot's positions are checked every 20 cm): the motors need to count 114 steps for the robot to cover 20 cm. Having control over the movements (turn left, turn right, and move forward) and the distances the robot covers, we can calculate the position of the robot at every instant. It is important to note that, before using the robot, the calibration algorithm must be run on the motors to ensure that the robot covers the distances in a straight line.

4 The Planning Algorithm

Once the robot has been built and the positioning control implemented, we can provide the robot with an intelligent behavior capable of finding a collision-free path between two given configurations (the Init and Goal configurations).

As we can see in Figure 4, the environment is built using quadrilaterals of 20 cm. This representation is only used to verify that the robot is moving correctly. Besides, obstacles are added to the environment in such a way that each obstacle occupies one and only one quadrilateral. This constraint allows the robot to move in 20 cm increments. Finally, the Init and Goal configurations are placed in the environment.

The algorithm used to search for a collision-free path between the Init and Goal configurations was Dijkstra's algorithm [2]. Nevertheless, to adapt the algorithm to the problem it was necessary to: i) represent the environment as a matrix, ii) associate the matrix with a graph, and iii) find a path between any two nodes.

After building the environment matrix and associating it with a graph structure, the representation of the environment can be seen in Figure 5, where red nodes represent free positions in the environment, and the path found by Dijkstra's algorithm is painted in blue. It is important to note that the robot can only make orthogonal movements.
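The matrix-to-graph search just described can be sketched as follows. This is a minimal illustration, not the authors' code: the grid encoding (0 = free cell, 1 = obstacle) and the function name are our own assumptions. With unit edge weights a plain breadth-first search would behave identically, but Dijkstra's algorithm is used here to match the paper.

```python
# Sketch: the environment matrix is treated as a graph whose nodes are
# free cells and whose edges connect orthogonal neighbours (the robot
# moves in 20 cm steps); Dijkstra's algorithm finds a shortest path.
import heapq

def dijkstra_grid(grid, init, goal):
    """Return the list of (row, col) cells from init to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {init: 0}
    prev = {}
    heap = [(0, init)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        # Orthogonal moves only, as stated in Section 4.
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return None  # goal unreachable: no collision-free path
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [goal], goal
    while node != init:
        node = prev[node]
        path.append(node)
    return path[::-1]
```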

Figure 4. Simulated Environment.

Figure 5. Graph representation of the environment and the path found by Dijkstra's algorithm.

4.1 Translating the path plan into robot instructions

For the robot to be able to interpret the collision-free path found, it was necessary to convert the path into robot instructions.

Initially, as part of the localization control, two basic routines were implemented:

Move Forward. The robot moves in a straight line for 20 cm. The motors are set to the same speed and the same direction.

Turn-Left and Turn-Right. The robot turns left (90 degrees) or right (90 degrees). The motors are set to the same speed but opposite directions.

The algorithm used to convert the path into robot instructions is the following:

ConvertPath(LNodes, LocationI, LocationG)
1. LocationRobot ← LocationI
2. LocationNextNode ← Pos_and_Or(NextNode)
3. While (MoreNodes)
4.   if (Or(NextNode) != Or(OrRobot))
5.     GetTurnOrientation(NextNode)
6.     AddNoOrientations(ListMovs)
7.   GetNoQuads(NextNode)
8.   AddNoQuads(ListMovs)
9.   LocationRobot ← Location(NextNode)
10. EndWhile

5 The Graphic User Interface

The graphic user interface and the algorithms related to the localization control and planning were implemented using Borland Delphi 7 on Windows XP [7]. To control and communicate between the PC and the robot, the ActiveX Phantom.dll controller was used, which allows us to program the RCX from Delphi.

After designing the user interface, it was necessary to analyze which operations were needed to build a functional interface. The following list shows the different functions implemented in the interface; their distribution can be seen in Figure 6.

1. Connection between the robot and the PC. This part of the interface (part number 1) allows the PC and the robot to communicate in both directions using the infrared tower. This function considers three states (connected, disconnected, and connection status).

2. Environment representation. This functionality (part number 2) is used to define the position of the obstacles, and the position and orientation of the Init and Goal configurations. It is important to note that this simulation does not require the robot to be connected. The environment representation includes operations to: i) define the number of rows and columns of the environment, ii) build the environment, iii) place an obstacle inside it, iv) specify the position and orientation of the Init and Goal configurations, and v) clear the environment so the user can define a new one.

3. Searching for the path. Here the interface allows us to build the graph, apply Dijkstra's algorithm, and translate the path found into robot instructions. Of course, the robot must be connected to the computer to execute this last operation. Searching for the path includes operations to: i) build the connectivity graph, ii) solve the graph using Dijkstra's algorithm, iii) view the list of nodes in the path, iv) modify the environment, and v) convert the list of nodes in the path into robot instructions. This last operation also sends the instructions to the robot.
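The ConvertPath procedure of Section 4.1 can be rendered in Python roughly as follows, combining it with the step constants derived in Section 3 (81 motor steps per 90-degree turn, 114 steps per 20 cm cell). The orientation encoding (N/E/S/W) and the helper names are illustrative assumptions, not the paper's exact implementation.

```python
# Rough sketch of converting a grid path into robot instructions.
TURN_STEPS = 81      # motor steps for a 90-degree turn (Section 3.3)
FORWARD_STEPS = 114  # motor steps per 20 cm cell (Section 3.3)

# Orientations in clockwise order, with the grid move each one implies.
HEADINGS = ["N", "E", "S", "W"]
MOVES = {"N": (-1, 0), "E": (0, 1), "S": (1, 0), "W": (0, -1)}

def convert_path(path, start_heading="N"):
    """Translate a list of (row, col) cells into (command, steps) pairs."""
    instructions = []
    heading = start_heading
    for prev, curr in zip(path, path[1:]):
        move = (curr[0] - prev[0], curr[1] - prev[1])
        target = next(h for h, m in MOVES.items() if m == move)
        # Turn until the robot faces the next cell (quarter turns only).
        delta = (HEADINGS.index(target) - HEADINGS.index(heading)) % 4
        if delta == 1:
            instructions.append(("turn_right", TURN_STEPS))
        elif delta == 3:
            instructions.append(("turn_left", TURN_STEPS))
        elif delta == 2:
            instructions.append(("turn_right", TURN_STEPS))
            instructions.append(("turn_right", TURN_STEPS))
        heading = target
        instructions.append(("forward", FORWARD_STEPS))
    return instructions
```

Feeding this function the path returned by the planner yields the list of movements that, as in the paper, is downloaded to the robot for execution.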

4. Tools. Inside the tools options we find functions to verify the status of the robot, for example: checking the battery charge, checking whether the firmware has been downloaded, and running the motor calibration process. Besides, the user can use a radar option, which shows inside the simulation the position and orientation of the robot at every moment, while the robot executes its instructions in the physical environment.

5. Simulation Environment. Here a simulation of the environment and the robot is drawn; we can see the obstacle positions and how the robot is covering the path. This part of the interface can be seen in Figure 8.

6. Status Bar. This bar shows information about how the interface is working, and is divided into four parts: i) the first gives information about the actions performed on the environment (obstacles, configurations, and so on), ii) the second shows the position of the cell the mouse is over, iii) the third shows the robot's position for each configuration, and iv) the fourth gives the position of the radar while it covers the path in the environment.

Figure 6. Graphic User Interface.

6 Results of the Project

Several tests were made using the graphic user interface to verify its functionality. Each stage was tested with different samples, and special attention was given to tests using the user interface alone, and to tests using both the user interface and the robot in the physical environment.

To give an idea of what a test means in this context, the following steps describe one:

1. Connect the infrared tower to the computer.
2. Set up the communication between the computer and the robot.
3. Place the obstacles in the physical environment, and place the robot at its initial position and orientation.
4. Build the environment inside the interface.
5. Build the connectivity graph.
6. Search for a collision-free path.
7. Run the motor calibration process.
8. Download the path to the robot.
9. Place the robot in the environment and choose the slot where the path has been downloaded.
10. Set up the radar to verify the robot's position in the environment while it covers the path.

It is important to remember that, in order to get better results, the battery charge must be verified before use.

Figure 7. Covering the path without using motor calibration.

Figure 8. More accurate path following using motor calibration.

In Figure 7 and Figure 8, the results of two different tests on the same sample are presented. In the first one, motor calibration is not applied: as we can see, even though the path is completed, the robot does not keep a straight line, and therefore begins to invade positions in the environment outside the path. On the other hand (Figure 8), using the motor calibration process we obtain a better result: the robot keeps a straight line when moving forward.

As we might expect, the robot's precision degrades with the length of the path, since the robot accumulates errors at each turn.

7 Conclusions and Future Work

After testing the application, we can say that we made a good selection of the algorithms used to integrate the system.

Special attention went to the implementation of the localization control. This development gives us an important tool to calculate the robot's position and orientation after each movement. It must be considered, however, that the motor architecture is quite unstable, so a motor calibration process had to be implemented to ensure that the robot keeps a straight line when moving forward. This is the main contribution of this project: the development and implementation of the localization control and the motor calibration algorithm.

The localization control is important because the LEGO RCX kit does not provide this functionality; incorporating it into the LEGO architecture therefore gives the robot the capability to develop more interesting behaviors.

As an application of the localization control, Dijkstra's algorithm was implemented to search for a collision-free path between two given configurations. Besides, the algorithm that translates the path into robot instructions is supported by the same localization control.

Finally, a graphic user interface was developed to simulate the robot's behavior, that is, the environment definition (obstacle positions, and locations of the Init and Goal configurations), the implementation of the planning algorithm, and the conversion of the path into robot instructions.

7.1 Future Work

Initially, we are thinking of building an algorithm able to explore an environment, finding the obstacle positions and reporting their locations to build a roadmap that can be used as input for the application described in this paper.

LEGO has evolved into the LEGO NXT; it would therefore be interesting to port the algorithms developed for the LEGO RCX to this new NXT technology.

References

[1] Barrientos Antonio, Peñín Luis Felipe, Balaguer Carlos, and Aracil Rafael. Fundamentos de Robótica. McGraw-Hill, 1997.

[2] Cormen Thomas H., Leiserson Charles E., Rivest Ronald L., and Stein Clifford. Introduction to Algorithms, 2nd edition. McGraw-Hill, 2001.

[3] Miglino Orazio, Hautop Lund Henrik, and Cardaci Mauricio. La robótica como herramienta para la educación. http://www.donosgune.net/2000/dokumen/, 2000.

[4] Borenstein J., Everett H. R., and Feng L. Where Am I? Sensors and Methods for Mobile Robot Positioning. 1996.

[5] Knudsen Jonathan. LEGO MindStorms: An introduction. http://www.oreillynet.com/pub/a/network/2000/, 2000.

[6] Sánchez Colorado Mónica María. Ladrillos programables para robótica educativa: LEGO vs. Crickets. http://www.eduteka.org/LegoCricket.php.

[7] Scholz Matthias Paul, Bagnall Brian, and Griffiths Lawrie. The leJOS tutorial. http://lejos.sourceforge.net/tutorial/.

[8] McComb Gordon and Predko Mike. Robot Builder's Bonanza, Third Edition. McGraw-Hill, 2001.

[9] Niku Saeed B. Introduction to Robotics: Analysis, Systems, Applications. Prentice Hall, 2001.

[10] Torres Fernando, Pomares Jorge, Gil Pablo, Puente Santiago T., and Aracil Rafael. Robots y sistemas sensoriales. Pearson Educación, Madrid, 2002.
