
Local Structuring of Unstructured Service Robots Environment

Vitaliy Rybak1, Dante Raúl Vásquez-Hernández1


1 Laboratorio de Robótica Inteligente, División de Estudios de Posgrado, Universidad Tecnológica de la Mixteca, 69000, Huajuapan de León, Oaxaca, México

{rybak, dante}@mixteco.utm.mx

Abstract. The presented work is devoted to the development of means for the local structuring of an unstructured environment to support the autonomous actions of a mobile indoor service robot with manipulation capacity. By the term “unstructured environment” we understand an environment in which the spatial states (position and orientation) of the objects of interest with respect to the basic robot coordinate system are initially unknown. We present the means to adapt the environment to the capacities of a service robot by constructing the working space model of the manipulator used to pick up and place the objects of interest. Informative support is based on the development of an artificial informative visual landmark that allows the robot to define its spatial state with respect to the landmark coordinate system and, hence, with respect to the map coordinate system in which the landmark coordinate system is defined. As a visual sensor, a monocular camera is used.

Keywords: Service Robotics, Environment Infrastructure, Visual Landmark, Robot Workspace.

1 Introduction
The new trends in robotics research have been denominated service robotics because of their general goal of bringing robots closer to human social needs. Recent development in robotics outside traditional industrial applications increasingly concentrates on the operation of multifunctional service robots in unstructured environments and on the interaction of human beings and robots. Some investigators relate such robots to the class of service robotics. They can be mobile and have manipulation capacity. Depending on their functions and applications, service robots are divided into robots for professional service and robots for personal service.
Professional service robots are used in a variety of applications at work, in public spaces, in hazardous environments, in locations such as the deep sea and space, and in defense, rescue, and security applications. They are more expensive than personal robots. According to the analysis of the International Federation of Robotics, the total value of professional service robots sold up to the end of 2008 was US$11.2 billion, with a total of 63,000 units, including 20,000 service robots in defense, rescue, and security applications.

Personal and domestic robots need to be multifunctional and able to perform a wide range of tasks. Multifunctional robots will be able to carry out both complex and routine tasks for people in a multitude of environments, for example assisting aging populations, aiding people with disabilities, helping with household chores, and performing operational activities. At present, service robots for personal and domestic use fall mainly into the areas of domestic (household) robots, which include vacuum-cleaning and lawn-mowing robots, and entertainment and leisure robots, including toy robots, hobby systems, and education and training robots. About 4.4 million units for domestic use and about 2.8 million units for entertainment and leisure were sold up to the end of 2008. About 940,000 vacuum-cleaning robots and more than 21,000 lawn-mowing robots were sold in 2008. Almost 12 million units are forecast to be sold between 2009 and 2012, representing an estimated value of US$3 billion [1]. The existing trends in the production and sale of service robots for personal and domestic use lead to a situation in which these low-cost single-function robots can occupy the home instead of humans [2].

To promote the creation and practical use of multifunctional personal and domestic service robots, two reciprocally complementary lines of solution can be combined. One is to increase the intelligent and mechanical capacities of the robot; the other is to create an infrastructure of the robot environment parallel to the infrastructure of the human world, in order to simplify the solution of the difficult scientific and technological problems related to robot goal-directed behavior in a dynamic environment [3].

The general goal of the presented work is to adapt the environment to the capacities of indoor service robots. The concept of material and informative components of the robot infrastructure is introduced into practice. Material components differ depending on the infrastructure's predestination: it can be the infrastructure of the human being (shared infrastructure), a special one designed for the robot (infrastructure for coexistence), or a mixed one. Informative components are mostly robot-oriented.

The immediate goal of the presented work is to develop the means of robot infrastructure that make possible the autonomous goal-directed actions of a mobile robot equipped with a manipulator. The special problem that arises is to provide autonomous actions of a manipulator with a mobile base. To make robots able to carry out such actions, we propose the means for constructing the model of the manipulator working space, taking into consideration the accessible positions of the final effector (gripper) with the needed orientation. We also propose an artificial informative visual mark that allows the robot to define its 3D position and orientation with respect to the mark 3D coordinate system. The position and orientation of the working space 3D coordinate system is also defined with respect to the same mark coordinate system. This allows the robot's visual sensor to define the position and orientation of the working space coordinate system with respect to the manipulator coordinate system. For the experimental investigation we use the mobile robot PowerBot equipped with the manipulator PowerCube (Fig. 1) [4, 5].

Fig. 1. Mobile robot PowerBot with manipulator PowerCube

2 Manipulator working space design

To simulate a possible material component of an environmental infrastructure, the model of the manipulator working space is presented as a parallelepiped with an inner 3D grid. To verify the accessibility of a node by the manipulator gripper with a given orientation, it is necessary to solve the inverse kinematics problem. The general solution of the inverse kinematics problem for the manipulator PowerCube is presented in [6]. For the same position and orientation of the gripper, the solution of the inverse kinematics problem can give several different manipulator configurations. Not all of them are applicable from the practical standpoint. To simplify the problem of selecting an admissible manipulator configuration, in the present work node accessibility is analyzed for two admissible gripper orientations, horizontal and vertical, and the manipulator configurations are restricted so that all their elements belong to a vertical plane. In this case the solution of the inverse kinematics problem is simpler, and the number of different manipulator configurations considered for selecting an admissible version is smaller. Fig. 2 shows two possible manipulator configurations for the same spatial state of the gripper; the first one is preferable.

Fig. 2. Different configurations that correspond to the same node.

For every grid node located along the vertical direction, accessibility is examined individually for the vertical gripper orientation and for the horizontal one. A vertical segment containing a continuous succession of accessible nodes is defined as the scope depth. A continuous set of the depths forms the accessible continuous working space for the given gripper orientation.
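The grouping of accessible nodes into continuous depth segments can be sketched as follows; the accessibility predicate is a stand-in for the inverse-kinematics test, and the grid spacing and reachable intervals in the example are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: for one vertical column of grid nodes, group consecutive accessible
// nodes into continuous "scope depth" segments. The Reachable predicate is a
// stand-in; a real implementation would call the inverse-kinematics solver
// for the given gripper orientation at each node.
public class ScopeDepth {
    interface Reachable { boolean test(double z); }

    // Returns [zStart, zEnd] pairs, the maximal runs of accessible nodes
    // in a column starting at z0 with n nodes spaced dz apart.
    static List<double[]> depthSegments(double z0, double dz, int n, Reachable r) {
        List<double[]> segs = new ArrayList<>();
        double start = Double.NaN;
        for (int i = 0; i <= n; i++) {
            boolean ok = i < n && r.test(z0 + i * dz);
            if (ok && Double.isNaN(start)) start = z0 + i * dz;   // run begins
            if (!ok && !Double.isNaN(start)) {                    // run ends
                segs.add(new double[] { start, z0 + (i - 1) * dz });
                start = Double.NaN;
            }
        }
        return segs;
    }

    public static void main(String[] args) {
        // Hypothetical column: nodes every 20 mm, reachable between
        // 300-500 mm and again between 700-800 mm.
        List<double[]> segs = depthSegments(0, 20, 50,
                z -> (z >= 300 && z <= 500) || (z >= 700 && z <= 800));
        for (double[] s : segs)
            System.out.printf("segment: %.0f..%.0f mm%n", s[0], s[1]);
    }
}
```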

In Fig. 3 the vertical sections of the parallelepiped for the two gripper orientations are presented. The depths are shown as continuous vertical segments. It is visible that the working space for the vertical gripper orientation is not the same as for the horizontal orientation (the graphics have different scales).

The user can choose a part of the space with the desirable depth (defined by the maximal object height and the distance of the object movement in the vertical direction for the vertical gripper orientation). For the selected manipulator kinematics, in the case of the vertical gripper orientation, for a maximal object height of 240 mm and a vertical displacement distance of more than 240 mm, the accessible zone in the horizontal plane is located between two circles of radius 420 and 550 mm, formed by manipulator rotation about the vertical axis within the admissible angle (Fig. 3). Using this information, the user can define the working volume, such as the rectangles in the horizontal section of the manipulator working space shown in Fig. 4, where the manipulation objects can be picked up and placed.
In Fig. 4 two possible versions of the horizontal section of the working space are shown.
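Under these assumptions, a point-membership test for the accessible horizontal zone reduces to an annulus check. In the sketch below, the radii follow the 420 and 550 mm values quoted in the text, while the angular limits of the manipulator rotation are placeholder assumptions:

```java
// Sketch: membership test for the horizontal accessible zone described in
// the text, an annulus between two circles centred on the manipulator's
// vertical rotation axis. The radii (420 and 550 mm) follow the text; the
// angular limits are hypothetical placeholders for the rotation range.
public class AccessibleZone {
    static final double R_MIN = 420, R_MAX = 550;                   // mm, from the text
    static final double A_MIN = -Math.PI / 2, A_MAX = Math.PI / 2;  // assumed limits

    // True if point (x, y), in mm relative to the rotation axis,
    // lies inside the accessible annular sector.
    static boolean accessible(double x, double y) {
        double r = Math.hypot(x, y);
        double a = Math.atan2(y, x);
        return r >= R_MIN && r <= R_MAX && a >= A_MIN && a <= A_MAX;
    }

    public static void main(String[] args) {
        System.out.println(accessible(500, 0));  // inside the annulus
        System.out.println(accessible(300, 0));  // too close to the axis
    }
}
```

A rectangular working volume chosen by the user would then be validated by checking that all its corners satisfy this test.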

Fig. 3. Scope depth: a – for vertical, b – for horizontal gripper orientation

Material infrastructure components corresponding to the manipulator working space (a – for vertical and b – for horizontal gripper orientation) are shown in Fig. 5.

The software developed to calculate the working space presents a friendly graphical interface that allows the user easy handling of the software and better comprehension of the results. The program is developed using the Java Development Kit (JDK 1.6.0_14) and the integrated development environment NetBeans 6.7. The graphics of the system are created with the OpenGL plugin and the JFreeChart library. The programming software used is free, which allows us to distribute the code of the program freely.

Fig. 4. Versions of the horizontal section of the manipulator working space: a – for vertical, b – for horizontal
gripper orientation

Fig. 5. Material infrastructure components corresponding to the manipulator working space: a – for vertical, b – for horizontal gripper orientation

3 Informative artificial visual landmarks

Artificial landmark-based navigation in unstructured environments is a topic of intensive investigation. There are different physical types of artificial landmarks used for the navigation of indoor and terrestrial, underwater, and aerial robots. Patterns adapted for detection by visual sensors that are self-similar and invariant to scaling, rotation, and viewing angle can serve as artificial landmarks whose detection indicates the presence of a landmark. The notion of self-similar landmarks (SSL) was first used in a robotic context in [7]. The authors' objective was to develop planar targets that would be detected easily with a standard perspective camera on a mobile indoor robot. Robust vision-based target recognition using a novel scale- and rotation-invariant target design based on SSL was introduced in [8]. The authors designed a circular landmark whose intensity is self-similar and anti-similar in all directions. They proposed a circular 3-pattern SSL target to estimate the robot pose, but as is well known, and as mentioned by the authors, at most 8 poses will be consistent with the observation of such a target.

Mostly, these types of landmarks serve the robot to obtain directional information. Some of the proposed visual landmarks allow the robot to define the direction and distance to the mark [9]. To define the robot location, a set of landmarks is used if a unique landmark is not used as a beacon.

Visual marks with memory storage, consisting of a landmark part and a memory part, are proposed in [10]. The landmark part is used to estimate the relative pose between a camera on the robot (a mobile manipulator) and the mark, while the memory part holds information about what the object is, what tasks there are, and how to conduct the tasks. The memory part consists of a QR code, a kind of two-dimensional bar code, and contains information such as object identification. A code reader is utilized to read the bar code data. The pose measurement part consists of a CCD camera, a lighting system placed on the manipulator, and an image processing system. The marks for self-positioning are adequately disposed in the working environment. The marks for manipulation are attached to all the objects of interest. Knowledge of the relative pose between a camera on the robot and the mark allows the robot to know the relative pose of the object with respect to the robot by measuring the relative pose of the mark from the robot.

The methodology of environmental support for autonomous mobile robots using visual marks proposed in [10] is closer than the others to our proposal [3]. The difference is in the landmark type and in the application of the operative memory to record the information about the last state of the object of interest.

We introduce a simple model of a multifunctional informative landmark that can be easily detected and identified and that allows the robot to define its position and orientation in 3D space with respect to the landmark coordinate system. This means that for a known spatial state of the landmark with respect to a global coordinate system, the robot can define its global position and orientation. At the same time, if the spatial state of some object is known with respect to the robot coordinate system, it is possible to calculate the spatial state of this object with respect to the landmark coordinate system. On the next arrival to manipulate this object, the robot can recalculate the object's spatial position from the mark coordinate system to the new position of the robot coordinate system. This capability is important for multifunctional service robots, allowing them to use the recorded data about the spatial positions of objects placed by one of the robots during previous actions, in order to take an object or to define the working space locations occupied by objects and the free space locations.
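The recalculation between the mark and robot coordinate systems is a composition of homogeneous transforms: if T_robot_mark is measured by the vision system at the new robot position and T_mark_obj was recorded earlier, then T_robot_obj = T_robot_mark · T_mark_obj. A minimal sketch with illustrative matrices (not data from the experiments):

```java
// Sketch: re-expressing an object pose recorded in the landmark frame in
// the robot's new base frame via 4x4 homogeneous transforms. The matrices
// in main() are illustrative; in practice T_robot_mark would come from the
// vision system and T_mark_obj from the recorded data.
public class FrameChange {
    // Multiply two 4x4 homogeneous transforms.
    static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    // Invert a rigid transform [R t; 0 1]: the inverse is [R^T -R^T t; 0 1].
    static double[][] inv(double[][] t) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i][j] = t[j][i];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i][3] -= t[j][i] * t[j][3];
        r[3][3] = 1;
        return r;
    }

    public static void main(String[] args) {
        // Hypothetical: mark frame at (1000, 500, 0) mm in the robot frame,
        // rotated 90 deg about z; object at (100, 0, 200) mm in the mark frame.
        double[][] tRobotMark = {
            { 0, -1, 0, 1000 },
            { 1,  0, 0,  500 },
            { 0,  0, 1,    0 },
            { 0,  0, 0,    1 }
        };
        double[][] tMarkObj = {
            { 1, 0, 0, 100 },
            { 0, 1, 0,   0 },
            { 0, 0, 1, 200 },
            { 0, 0, 0,   1 }
        };
        double[][] tRobotObj = mul(tRobotMark, tMarkObj);
        System.out.printf("object in robot frame: (%.0f, %.0f, %.0f)%n",
                tRobotObj[0][3], tRobotObj[1][3], tRobotObj[2][3]);
    }
}
```

The inverse is used in the opposite direction, recording a newly observed object pose into the mark frame as T_mark_obj = inv(T_robot_mark) · T_robot_obj.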

The landmark is a planar target that can be detected easily with a mono-camera located on a manipulator or on a mobile indoor or outdoor robot. The camera has to be calibrated, and the parameters of distortion compensation have to be known. In the first experimental version, a landmark is composed of four circles (subtargets) that form a rectangle with known side lengths (Fig. 6 a). As a landmark identifier, the distances between corresponding circles, the combinations of the area relations of the corresponding subtargets, and the colors of the subtargets and background can be used. To define the landmark orientation, it is possible to design asymmetric subtarget positions. The selection of circles is not critical; various types of subtargets can be used (Fig. 6 a, b, c). The proposed landmark is invariant to scaling, rotation, and viewing angle, which allows us to combine it with a self-similar landmark.
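One of the identifier options mentioned, the combination of subtarget area relations, can be sketched as a scale-normalized signature. The pixel areas in the example are invented for illustration; in a real system they would come from image segmentation:

```java
import java.util.Arrays;

// Sketch: using the relations between subtarget (circle) areas as a landmark
// identifier. Ratios are normalised by the largest circle, so the signature
// stays the same when the whole landmark appears larger or smaller in the
// image. The pixel areas below are illustrative, not measured data.
public class LandmarkId {
    // Scale-free signature: each area divided by the largest one.
    static double[] areaSignature(double[] areas) {
        double max = Arrays.stream(areas).max().orElse(1);
        double[] sig = new double[areas.length];
        for (int i = 0; i < areas.length; i++) sig[i] = areas[i] / max;
        return sig;
    }

    // Compare two signatures within a tolerance.
    static boolean matches(double[] a, double[] b, double tol) {
        for (int i = 0; i < a.length; i++)
            if (Math.abs(a[i] - b[i]) > tol) return false;
        return true;
    }

    public static void main(String[] args) {
        double[] stored = areaSignature(new double[] { 400, 400, 200, 100 });
        // The same landmark seen closer: all areas scale together.
        double[] seen = areaSignature(new double[] { 1600, 1600, 800, 400 });
        System.out.println(matches(stored, seen, 0.05));
    }
}
```

Combining this signature with the subtarget colors and the inter-circle distances would further reduce the chance of confusing two landmarks.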

Fig. 6. Landmarks: a – of different colors and combinations of the area relations of the corresponding subtargets, b, c – combined with self-similar landmark.

The image processing algorithm defines the 3D coordinates of the centers of the circles with respect to the camera coordinate system and calculates the parameters of the transformation matrix from the landmark coordinate system into the camera coordinate system, as well as the inverse one.
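A minimal sketch of how the landmark-to-camera transform could be assembled once the 3D circle centres are known in the camera frame; the construction from three of the centres, and the point values used, are illustrative assumptions rather than the authors' actual algorithm:

```java
// Sketch: building the landmark-to-camera transform from the 3D centres of
// three of the rectangle's circles, expressed in the camera frame (the
// fourth circle is redundant for the frame construction and would be used
// as a consistency check). Point values are illustrative.
public class MarkFrame {
    static double[] sub(double[] a, double[] b) {
        return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }
    static double[] norm(double[] v) {
        double n = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[] { v[0] / n, v[1] / n, v[2] / n };
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
        };
    }

    // p0 = origin circle, pX = circle on the landmark x-axis,
    // pY = circle on the landmark y-axis; all in camera coordinates.
    static double[][] markToCamera(double[] p0, double[] pX, double[] pY) {
        double[] ex = norm(sub(pX, p0));             // landmark x-axis
        double[] ez = norm(cross(ex, sub(pY, p0)));  // normal to the landmark plane
        double[] ey = cross(ez, ex);                 // completes a right-handed frame
        return new double[][] {
            { ex[0], ey[0], ez[0], p0[0] },
            { ex[1], ey[1], ez[1], p0[1] },
            { ex[2], ey[2], ez[2], p0[2] },
            { 0, 0, 0, 1 }
        };
    }

    public static void main(String[] args) {
        // Hypothetical landmark facing the camera 800 mm away.
        double[][] t = markToCamera(
            new double[] { 0, 0, 800 },
            new double[] { 200, 0, 800 },
            new double[] { 0, 150, 800 });
        System.out.printf("mark origin in camera frame: (%.0f, %.0f, %.0f)%n",
                t[0][3], t[1][3], t[2][3]);
    }
}
```

The inverse of this matrix gives the camera pose in the landmark frame, which is the quantity the robot needs for self-localization.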

Fig. 7. Mobile robot PowerBot equipped with landmarks

Placing a landmark on top of the robot and using a network of external cameras will make it possible to solve, for closed areas, the same navigation problem that is solved by GPS navigation systems for open areas (Fig. 7). The external camera network can be used to organize cooperative dynamic behavior of a family of service robots.

Additional information associated with the landmark can be recorded in robot memory and used for planning goal-directed actions. This justifies naming the proposed type of mark an “informative mark”. Combined with the guide marks of a human-being infrastructure, this type of landmark would serve to ensure the safety of both the human and robot communities (Fig. 8).
The capability to relate the spatial parameters of the robot to the landmark coordinate system allows us to combine topological and metric maps, which can simplify the task of autonomous service robot navigation and docking at the objective place.

Fig. 8. Landmark combined with guide.

In Fig. 9 the manipulator actions of object grasping are presented. The grasping pose was defined by calculating the manipulator pose with respect to the landmark coordinate system and by using the information about the position of the object of interest with respect to the landmark coordinate system.

4 Conclusion

The proposed means for constructing the material and informative components of the robot environment infrastructure allow us to bring nearer the time of creation and practical use of multipurpose service robots. Future work will be directed to solving the problems of developing information support for multifunctional personal robots and to creating a parallel robot world and adjusting it to the human world.

236
References

1. International Federation of Robotics, http://www.ifr.org, 2009.


2. G. Bekey and Y. Junku: The Status of Robotics. IEEE Robotics & Automation Magazine, 2008, 15(1), pp. 80-86.
3. V. I. Rybak: Safety, uncertainty, and real-time problems in developing autonomous robots. Proc. of the 8th WSEAS Intern. Conf. on Signal Processing, Robotics and Automation, Cambridge, UK, 21-23.02.2009, pp. 31-44, ISBN 978-960-474-054-3, ISSN 1790-5117.
4. Intelligent Mobile Robotic Platforms for Applications, Research and Rapid Prototyping,
http://robots.activmedia.com.
5. SCHUNK GmbH and Co. KG, http://www.amtec-robotics.com
6. V. I. Rybak, G. Sigüenza Paz: Solución en forma cerrada del problema cinemático inverso para el manipulador de configuración TRTRRT. Memorias del Congreso de Instrumentación SOMI XX, León, Guanajuato, México, 24-28 de octubre de 2005, Clave VRXX106, ISBN 970-32-2673-6.
7. D. Scharstein, A. J. Briggs: Real-time recognition of self-similar landmarks. Image and Vision Computing, Elsevier, 2000, Vol. 19, pp. 763-772.
8. A. Negre, C. Pradalier and M. Dunbabin: Robust Vision-based Underwater Target Identification & Homing Using Self-Similar Landmarks. Author manuscript, published in 6th International Conference on Field and Service Robotics, Chamonix, France, 2007.
9. E. Celaya, J.-L. Albarral, P. Jiménez, and C. Torras: Visually-Guided Robot Navigation: From Artificial to Natural Landmarks. Institut de Robòtica i Informàtica Industrial (CSIC-UPC), Llorens i Artigas 4-6, 08028 Barcelona, Spain.
10. J. Ota, M. Yamamoto, K. Ikeda, Y. Aiyama, T. Arai: Environmental support method for mobile robots using visual marks with memory storage. Proc. IEEE International Conference on Robotics and Automation, Vol. 4, 1999, pp. 2976-2981.
11. Videre Design: DCAM and DCAM-L Digital Video Camera User’s Manual, 2003.
