
A Self Model Learning Robotic Manipulator

Leading to Eye-Hand Coordination

By

Michael Jacob Mathew


Enrolment No: 30EE12A12005

Submitted in Partial Fulfilment of the Requirements


for the Degree of Master of Technology in
Mechatronics of Academy of Scientific and Innovative Research (AcSIR)

CSIR-Central Mechanical Engineering Research Institute


M.G. Avenue, Durgapur-713209, West Bengal
July 2014

CERTIFICATE
This is to certify that the thesis titled A Self Model Learning Robot Manipulator
Leading to Eye-Hand Coordination submitted in partial fulfilment of the requirements
for the award of degree of Master of Technology in Mechatronics of Academy of Scientific and Innovative Research, is an authentic record of the work carried out by me under
the supervision of Dr. S. Majumder at CSIR-Central Mechanical Engineering Research
Institute, Durgapur.
The work presented in this thesis has not been submitted to any other University/ Institute for the award of any degree.

4 July 2014

Michael Jacob Mathew

It is certified that the above statement by the candidate is correct to the best of my
knowledge and belief.

4 July 2014

Dr. S. Majumder
Chief Scientist
CSIR-CMERI
Surface Robotics Laboratory

ACKNOWLEDGEMENT:

It gives me great pleasure to present this project and thesis in fulfilment of the Master of Technology in Mechatronics at the Council of Scientific and Industrial Research-Central Mechanical Engineering Research Institute, Durgapur, West Bengal, India. I express my sincerest thanks to the Director, CSIR-CMERI, and the DG, CSIR, for giving me the opportunity to pursue my master's programme with AcSIR. I am equally grateful to Prof. S.N. Shome, Dean, School of Mechatronics, for his continuous support and valuable suggestions.

I express my immense gratitude to my guide Dr. S. Majumder for his constant advice, insight and dedicated support. He not only advised me on the technical aspects of my research but also taught me the art of progressing, writing and presenting research. My research work would have been futile without him. I deeply appreciate the freedom he has given me in this work as well as the invaluable opportunity to use all of his lab facilities. I am indeed fortunate to have such an excellent advisor whose energy and enthusiasm for research galvanized me. I also thank Ms. S. Datta for her constructive and critical feedback on my research work. Equally important is Dr. Dip Narayan's help in the procurement and making of my experimental setup, without which I could not have completed this thesis.

Rachit Sapra, my colleague, batch mate and avid supporter, cannot be thanked in a few words. His work and suggestions helped me improve my research a great deal. His dedication and patience motivated me to stay calm at times of failure and bleak situations. Manvi Malik, my senior batch mate and colleague, helped me in organising this thesis; its layout evolved from her suggestions and feedback. I thank her from the bottom of my heart, since proof reading is not at all an interesting task. Equally important is Bijo Sebastian, my junior batch mate, whose timely support and assessment greatly aided the progress of this research. I would also like to express profuse thanks to the colleagues in the SR Lab: Suman Sen, Sukanta Bhattacharjee, Arijit Chowdhury and especially Anjan Lakra and Nalin Paul, for developing the necessary hardware used for my experimental test setup. The fact that I could rely on the support of Anjan and Nalin at any time is indeed laudable.

Last but not the least, I thank God, whose providence was profoundly felt at stages where progress was negligible and which usually turned zero-progress weeks into propitious ones. I would also like to thank my parents P A Mathew and Sherly Jacob, my sister Mary Mathew and my dearest, most trusted friend Arya Sree for their prayers, motivation and steady backing. These people deserve the entire credit for my emotional well-being, without which this thesis would have remained only a title.

Contents

Abstract
List of Figures
List of Tables
0.1  Abbreviations
1  Introduction
   1.1  Learning from human behaviour
   1.2  Eye-Hand Coordination in robots using human behaviour
   1.3  Using visual feedback to estimate manipulator model
   1.4  Summary
   1.5  Organisation of Thesis
2  Motivation and Objective
   2.1  Problem Statement
   2.2  Need behind self-modelling robotic manipulator
   2.3  Various Challenges in the problem
   2.4  Summary
3  Study of Existing Methods
   3.1  Self Modelling
        3.1.1  Evolutionary Robotics Approach
        3.1.2  Continuous Self Modelling Approach
        3.1.3  Self Recalibrating Approach
        3.1.4  Human Inspired Self Modelling
        3.1.5  Using Bayesian Networks for Learning Robot Models
        3.1.6  Learning Based on Imitation
   3.2  Eye Hand Coordination
        3.2.1  Using Discrete Kinematic Mapping
        3.2.2  Principle of Automated Tracking
        3.2.3  Principle of Visual Servoing
        3.2.4  Coordination Using Fuzzy Logic
        3.2.5  Calibration Using Active Vision
        3.2.6  Un-calibrated Stereo Vision in Coordination
   3.3  Summary
4  Proposed Approach to the Problem
   4.1  Approach
   4.2  Steps in Detail
        4.2.1  Acquiring Images and Manipulator Control
        4.2.2  Image Processing
        4.2.3  Curve Fitting and Finding DH Parameter
        4.2.4  Optimize DH Parameters
        4.2.5  Validating the Model
   4.3  Summary
5  Stereo Image Processing and Obtaining Raw Data
   5.1  Stereo Vision Fundamentals
        5.1.1  Calibration of stereo camera
   5.2  Image Processing
        5.2.1  Image Segmentation
        5.2.2  Refining ROI: Morphological Operations
        5.2.3  Computing Centroid of ROI
   5.3  Raw Data
   5.4  Summary
6  Data Analysis
   6.1  Statistical Learning
   6.2  Regression Analysis
        6.2.1  Circular Regression
        6.2.2  Elliptical Regression
   6.3  Non-Linear Optimization
        6.3.1  Nelder Mead Approach
   6.4  Simulation
        6.4.1  Data generation
        6.4.2  Results of simulation
        6.4.3  Comments on simulation
   6.5  Summary
7  Experiment Setup
   7.1  Hardware used
        7.1.1  The Invenscience LC ARM 2.0
        7.1.2  Stereo Camera
        7.1.3  Other Hardware
   7.2  Detailed Steps of the Procedure
   7.3  Design of Control Design Software
        7.3.1  Software Architecture
   7.4  Design of Analysis Tool Box
   7.5  Summary
8  Results and Discussion
   8.1  Results
        8.1.1  Validation of model
   8.2  Discussions
9  Conclusion and Future Work
   9.1  Conclusion
   9.2  Future Work and Scope
A  Robotic Manipulators
   A.0.1  Important Definitions
   A.0.2  Denavit Hartenberg Notation
   A.0.3  3D Transformations and Forward Kinematics
   A.0.4  Inverse Kinematics
B  Software Tools Used
   B.0.5  Platforms
   B.0.6  Libraries
Bibliography
List of Publications

Abstract
The evolution and behaviour expressed by natural intelligence has always baffled scientists and researchers. This has led to the constant study of animals, including humans, in order to understand and mimic similar behaviour in machines. The intelligence of a machine is brought about by a set of mathematical formulations that approximate the model of the machine and of the environment in which it operates. Apart from this, the model of a robot is essential for many reasons. The performance of every robotic system depends on how accurately the model represents the actual system. Traditionally the robot model is fed into the system by the designer and later calibrated to the required accuracy. This work presents an approach in which the model of the system is estimated from a few basic pieces of a priori information, with the rest of the information derived using visual feedback. The ability of the robot to self-model will assist it to recover in case of partial damage and create an opportunity to recalibrate the system using the on-board sensors. The model thus learned can be refined further over additional trials using a learning based approach, which could lead to perfecting the coordination between the eye and hand of the robotic manipulator system.

The thesis takes inspiration from the biological world for deriving the self model of the robot manipulator. The self model derived by the robotic system is used for developing eye-hand coordination in the robotic manipulator. The core technique of eye-hand coordination is the mapping from the visual sensors to the robotic manipulator. The visual data closes the otherwise open kinematic chain of the manipulator. Many of the traditional methods were developed for robotic manipulators in structured environments with a camera that overlooks the entire workspace. An effective coordination between the visual sensor and the end effector will also help in manipulating objects under partial occlusion.

This thesis suggests an approach to self-model the manipulator by tracking the end-effector position of the manipulator under various actuation commands and then deriving the characteristics of each link and joint from the information extracted from the visual data. The approach uses tools from statistical learning, such as regression, together with a numerical optimization technique. The approach was tested experimentally on an Invenscience robotic manipulator observed by a Point Grey binocular camera. The results show that it is possible to generate an approximate self model of the system with a reasonable degree of accuracy. The generated model is validated by driving the manipulator to random configurations.

List of Figures

1.1   An infant curious on his actions [50]
1.2   A player playing tennis [60]
1.3   A player hitting the baseball [32]
1.4   The Robonaut Robot by NASA undergoing calibration [5]
2.1   Overview of the problem statement
3.1   Algorithm outline of continuous self modelling machines [10]
3.2   Learning forward models of robots using Bayesian Networks [17]
3.3   M Xie's development approach [64]
3.4   Principle of visual servoing [38]
4.1   Flow chart of the proposed solution
5.1   Schematic view of a 3D camera system
5.1   Colour thresholding based image segmentation
5.2   Colour thresholding based image segmentation
5.3   Morphological Operations
5.4   Raw data plots of LHS Camera
5.5   Raw data plots of RHS Camera
6.1   Simulated Circle Data
6.2   Simulated Circle Data Continued
6.3   Simulated Ellipse Data
6.4   Simulated Ellipse Data Continued
6.5   Comparison with Gradient Descent and LM method
7.1   The Invenscience LC ARM 2.0
7.2   The Bumble Bee stereo camera from Point Grey International
7.3   The complete experimental setup
7.4   The control software interface
7.5   Control Software Architecture
7.6   Serial Port Configuration [9]
7.7   Stereo Camera Module
7.8   The OpenGL Module handling the creation of link model of manipulator
7.9   The Analysis toolbox designed for deducing information from raw data produced by the manipulator
8.1   Screenshot of user interface software after calibration
8.2   Curve fitting result of DOF 1
8.3   Curve fitting result of DOF 2
8.4   Curve fitting result of DOF 3
8.5   Curve fitting result of DOF 4
8.6   Analysis ToolBox Image Segmentation
8.7   Analysis ToolBox DOF 1 Result
8.8   Analysis ToolBox DOF 2 Result
8.9   Analysis ToolBox DOF 3 Result
8.10  Analysis ToolBox DOF 4 Result
8.11  Validation of the model learned: Trial 1
8.12  Validation of the model learned: Trial 2
A.1   An example of a Serial Manipulator [19]
A.2   An example of a Parallel Manipulator [33]
A.3   An example of a Serial-Parallel Manipulator [33]
A.4   The Denavit Hartenberg Convention
A.5   Geometric transformations [49]
A.6   Transformation from local coordinate frame to global coordinate frame
A.7   A Puma 560 manipulator with coordinate frames [41]

List of Tables

8.1  Table showing coordinate locations of base joint
8.2  Table showing link lengths learned after analysis
8.3  Table showing various joint angles learned after analysis
8.4  Table showing link offsets learned after analysis
A.1  DH Parameters of Puma 560 manipulator

Notations
The main abbreviations used in this thesis are listed below.

0.1 Abbreviations

1. DOF - Degree of Freedom


2. PWM - Pulse Width Modulation
3. LSB - Least Significant Bit
4. MSB - Most Significant Bit
5. RHS - Right Hand Side
6. LHS - Left Hand Side
7. SDK - Software Development Kit
8. SMPS - Switch Mode Power Supply
9. V - Volt [unit of electric potential]
10. A - Ampere [unit of electric current]
11. ROI - Region Of Interest
12. API - Application Programming Interface

Chapter 1
Introduction
Animals, including human beings, have evolved in both physical and mental abilities, which made them capable of surviving in the dynamic environment of the earth. This evolution took several thousand years to mature, and this immense ability to sustain and prevail in an unpredictable environment in a robust fashion influenced researchers and scientists to create similar machines that could make our lives better. Thus the field of robotics emerged with the glorious dream of creating machines that can learn, think and behave like animals and ultimately humans [52]. This attempt to create artificially intelligent machines that match natural intelligence motivates many researchers to study human behaviour, decision making and learning processes. Finding an answer to questions on how human beings think, learn, coordinate and behave will possibly show a path to reach this goal. In this respect, how our motor system is coordinated with the sensory system is quite a fascinating topic that has been investigated by many researchers.

1.1 Learning from human behaviour

From an engineering perspective the human body is a marvellous piece of highly robust, carefully built autonomous machinery, a beautiful synergy of sensors, intelligence and actuators. The sensors are used to sense changes in the environment, the intelligence analyses these changes and infers information leading to decision making, based on which the actuators (muscles) are triggered accordingly. Though they belong to the most intelligent species on earth, human infants are among the least capable offspring at birth. They develop their instincts, knowledge and behaviour from the way they are nourished and nurtured. The influence of self learning is unquestionable: human infants learn relations and patterns from their actions and the corresponding reactions in the environment.

Figure 1.1: An infant curious on his actions [50]
This work is primarily inspired by the human development from infancy. There are
several aspects of human behaviour that are interesting; however this work concentrates on
the development of the strong relationship between human hands and their eyes.
The tremendous relationship between their eyes and hands has coined the phrase eye-hand coordination in the literature. Human infants do not know how each of their muscles is connected or how they function; the same is the case with their visual system. Still, they figure out a relationship between their eyes and hands in due course of growth. The first acquaintance of a human infant with an object leads to observation and subsequent manipulations in numerous ways (most of them being incorrect). Finally, the infant learns either by himself or by observing a fellow human's ability to handle it correctly. Gradually, the key to correctly reaching and holding that particular object is acquired. In spite of several failures and confusions they are able to continuously learn by making observations about their actions. The self-efficacy of infants increases gradually by understanding and learning the relationship between their motor behaviour, visual feedback and their interaction with the environment.

Figure 1.2: A player playing tennis [60]

Self-efficacy can be defined as the possession of knowledge of how one's actions can affect the environment [26]. Humans tend to react to and manipulate an object based on this knowledge. Usually, the way of grasping an object is determined on the fly if we have prior experience in handling that particular object.
When games like tennis, badminton and cricket are played, the player concentrates only on the target (tennis ball or shuttlecock) rather than on his manipulator (hands). Figures 1.2, 1.3 and 1.4 show this explicit relationship between our hands and eyes. His hand gets automatically adjusted for a particular strike, which he plans dynamically based on the trajectory of the target. In the literature this is described as the eye-hand coordination of the player [24]. The player perfects this relationship through hours of practice (experience). This kind of self modelling, in which an arm reaches and positions itself while the eyes remain focused only on the object, is attempted in this work.

Figure 1.3: A player hitting the baseball [32]

1.2 Eye-Hand Coordination in robots using human behaviour

Like humans, the robot manipulator should also be able to exploit the posture of manipulation as a solution to an obvious problem. It should have an intuitive understanding of the problem and find the solution using its prior experience of handling similar problems. When a new problem arises, it should be able to find at least an approximate solution, with the potential to perfect it using feedback and add it to its knowledge base. Even a human can mishandle an object due to initial inexperience or lack of supervision. Humans are able to see these tasks as obvious problems because their brain can correlate them with similar experiences, making use of the immense knowledge they attain in the course of life. It has been observed that an imperfect model of the system results in a rather complicated control algorithm. A robotic system model can be imperfect for many reasons, such as noise, kinematic parameter errors, friction at the joints, elasticity of the links, backlash at the joints and other un-modelled errors.
By making the robot self-model, it attains the capability to correct its model as mechanical imperfections creep into the system over time. It is often necessary to recalibrate the robot to retrieve its prior performance once the system goes out of sync with its sensor measurements [47]. The performance of a robotic platform depends heavily on the system model that is fed into its software. For the software, the entire system is a set of mathematical equations which it manipulates to produce solutions to the current problem. The presence of robots in unstructured and uncertain environments makes it complicated to obtain an accurate model which is in sync with the on-board sensors. Self modelling can be an alternative to pre-programming of the robot model since it uses direct sensory inputs [26]. Even after pre-programming, robotic systems are usually recalibrated in accordance with the sensory input. For example, industrial manipulators are recalibrated with vision data after a specific number of cycles to guarantee consistent performance.

Figure 1.4: The Robonaut Robot by NASA undergoing calibration [5]

1.3 Using visual feedback to estimate manipulator model

Although a large number of works exist on making robots capable of learning and modelling the environment, relatively little is known about the capability of a robot to learn its own morphology from direct observation. This work tries to bring about human-behaviour-inspired self modelling of a robotic system. Here the robot learns the relationships of its body by applying various actuations and then analyzing their effect on the environment. The software builds a coarse model of the robot from repeated observations and then refines the model by minimizing, through an iterative procedure, the error between the posture predicted using the coarse model and the observed posture. The method described here is a rather intuitive way of self modelling. This work concentrates on estimating the model of a manipulator using visual feedback. The typical approach of learning the kinematic model, then calibrating the sensors and then learning the effect of both on the environment is avoided; rather, the robot by itself derives the sensor-actuator relationship by analyzing the result of its direct interaction with the environment. In this work, the robot analyzes the trace of a marker attached to the end-effector, produced by tracking the marker using image processing techniques. The trace of the end-effector will be a partial segment of a standard curve such as a circle, an ellipse or a straight line, depending on the type of joint. Hence, identifying the curve of which the trace is a part helps in deriving the parameters required for the kinematic representation of the manipulator. The kinematic representation thus obtained contains a relationship between input and output, and it incorporates the mechanical uncertainties that are usually only approximated in conventional modelling. The relationship between input and output is derived by treating the manipulator as a black box.
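To make this idea concrete, the following minimal sketch (Python with NumPy; the link length, joint pose, camera placement and focal length are made-up placeholder values, not measurements from the actual setup) simulates the marker of a single revolute joint and projects it through an ideal pinhole camera. The 3D trace is a circular arc, and its image is a partial conic: a circle when the motion plane is parallel to the image plane and, in general, an ellipse.

    import numpy as np

    # Hypothetical revolute joint: a 0.3 m link swept through 90 degrees.
    link_length = 0.3
    angles = np.deg2rad(np.arange(0, 91, 5))

    # End-effector marker positions in the joint frame: a circular arc.
    arc_3d = np.stack([link_length * np.cos(angles),
                       link_length * np.sin(angles),
                       np.zeros_like(angles)], axis=1)

    # Tilt the motion plane and place it in front of the camera (arbitrary pose).
    tilt = 0.6  # radians, rotation about the camera x-axis
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(tilt), -np.sin(tilt)],
                  [0.0, np.sin(tilt),  np.cos(tilt)]])
    pts_cam = arc_3d @ R.T + np.array([0.1, 0.0, 1.5])

    # Ideal pinhole projection with an assumed focal length of 800 px.
    f = 800.0
    u = f * pts_cam[:, 0] / pts_cam[:, 2]
    v = f * pts_cam[:, 1] / pts_cam[:, 2]
    image_trace = np.column_stack([u, v])   # points lying on a partial ellipse
    print(image_trace[:3])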

1.4 Summary

The field of robotics is moving towards the creation of robust architectures similar to biological machines such as animals. The ability of humans to sense, learn and adapt through various actions has motivated researchers to try similar approaches in robots. Perfect coordination between human hands and visual sensors remains a holy grail of robotic manipulator research. The relationship between the eyes and hands is derived in the course of growth from infancy, through observation of various action-reaction pairs of one's own body and of the environment. A similar approach, deriving the relationship between the visual sensors (camera) and the manipulator without any prior approximate model, would help solve problems such as the need for re-calibration, enable wider use of robotic manipulators in unstructured environments inaccessible to humans, and ultimately improve the resilience and robustness of the system under slight damage.

1.5 Organisation of Thesis

The work tries to relate the actions and reactions of the manipulator using various data analysis techniques, which are explained in detail in the subsequent chapters.

Chapter 2
Chapter 2 deals with the motivation and objective of this thesis work. The chapter explains the problem statement, the need for a self modelling robotic manipulator, the various challenges to be addressed and the possible approaches that could solve the problem. This chapter also gives the possible subdivisions of the problem.

Chapter 3
The literature survey carried out with regard to the thesis topic is presented in a concise fashion. Various existing approaches to self modelling and eye-hand coordination are discussed. The chapter has sections ranging from evolutionary robotics to human inspired self modelling. The main techniques used to bring about eye-hand coordination between the robot manipulator and the camera, both in industry and in research, are discussed.

Chapter 4
Chapter 4 gives an insight into the proposed approach to the problem. The chapter builds on the literature survey and suggests a new way of self modelling using an intuitive method and regression. The chapter also discusses the various steps of the experiment used to validate the derived model.

Chapter 5
One of the major parts of this project is the stereo vision processing for self modelling. A stereo camera is the only sensor used to understand the manipulator configurations. The chapter explains the fundamentals of stereo vision and the calibration techniques for a stereo camera. It further gives details of the various image segmentation algorithms employed in this work to derive data from the images.

Chapter 6
This chapter consists of details of the various statistical learning tools available and some basic knowledge about the field. The chapter explains in detail the concept of regression and the algorithms specifically used in this work. The algorithms used are simulated using generated data. The criteria used for the data generation and validation are also discussed.

Chapter 7
Chapter 7 is about the experimental setup used to validate the method suggested by this thesis. It also describes the type of hardware selected for the manipulator as well as the camera. Two different software tools were designed to learn the model of the manipulator. This chapter gives details about the software architecture.

Chapter 8
This is the key chapter of the thesis. The results section displays screenshots of the various outputs obtained during the experiment. The discussion section presents the inferences derived from the method and the validation of the model learned by the manipulator.

Chapter 9
This is the last chapter of the thesis and it describes various techniques that can be used or implemented to further improve the system. This chapter also contains a conclusion section which summarises the entire work and the results obtained.

Chapter 2
Motivation and Objective
Modelling of a robotic system is an age-old problem. A robot system performs various actions by manipulating its system model using various inputs and feedback measurements from the sensors. Thus the degree of closeness of the system model to the actual system determines the performance of the system. The following sections explain various aspects of self modelling of a robot system and the other objectives.

2.1 Problem Statement

Deriving the entire model of a fully fledged robot system is beyond the scope of this work. This thesis concentrates on making a robot camera-manipulator system derive a relation between its visual sensors and its manipulator so as to achieve coordination between its eyes and hands. The robot system observes the reaction of its manipulator on the environment using visual sensors. A stereo camera is used as the visual sensor since humans learn and perceive using binocular vision for obtaining 3D information. Though absolute measurements are not possible with the human visual system, a relative measurement usually suffices for our visual cortex to learn from the environment. The problem statement is illustrated by the schematic shown in figure 2.1.

2.2 Need behind self-modelling robotic manipulator

Some of the aspects that motivate solving this problem are listed below.

- Unavailability of human intervention when robots are sent on operations to remote planets or space stations.
- The ability of the robot to survive minor injuries in a foreign environment.
- Re-calibration of robots in complex unstructured environments is not an easy task.
- It helps towards the goal of making completely autonomous robots.
- A robot system capable of re-modelling itself will be more robust.
- The system can review its performance and modify itself accordingly, so that problems like wear, tear, friction and other mechanical constraints that accumulate over time can be automatically incorporated.
- The system can be made to perfect the model by evaluating its performance with each action-reaction pair.
- The robot system becomes closer to its natural counterparts.
- A huge amount of time can be saved.
- The system automatically derives its own constraints.

Figure 2.1: Overview of the problem statement

As mentioned before, robotic research started with the intention of making machines that could either replace or help human beings in their daily tiresome chores. A system that is capable of modelling and re-modelling itself would completely avoid the need for a human operator or programmer. The system will be able to re-model itself in case of minor damage or additional mechanical constraints that may arise over time. The present robot manipulator industry manufactures manipulators for specific tasks; humans, on the other hand, use their two hands for all the tasks that are put in front of them. To impart similar behaviour to robot manipulators, the system should be able to learn about itself, its environment and its constraints.

2.3 Various Challenges in the problem

There are several problems that have to be addressed to make the robot capable of learning from its actions so as to infer its own model. The various challenges identified in this problem, which are to be individually solved, are the following.

- Study of existing solutions and methods: This involves an extensive literature survey of current approaches used to address similar problems. Equally important is the need to thoroughly understand the basic concepts of robot manipulators and stereo vision.
- Selection of hardware: This involves market research and finding a robotic manipulator that fits the problem. The manipulator chosen should have an adequate number of sensors and should be remotely controllable. The remote control ability makes it easier for the programmer to sit far from the manipulator, thereby avoiding any proximity hazards. Apart from the robotic manipulator, the study and procurement of a stereo camera is also needed, which will be used for stereo visual feedback. The camera chosen should have minimal distortion and lens aberrations so as to obtain images containing less noise. The success of the entire experiment depends on the choice of the stereo camera, and hence this step is important.
- Making the hardware operational: This step involves the study of data sheets, software libraries and sample code so as to interface the camera and the manipulator with a computer. Knowledge of hardware interfacing and programming is required for this step.
- Principles of computer vision: The images obtained from the stereo camera are to be processed so as to infer logically plausible information from the camera data. This involves the study and implementation of various image processing and computer vision algorithms.
- Elements of statistical learning: The raw data derived using the image processing algorithms are to be analyzed using learning algorithms to build the system model from the visual data. Suitable algorithms have to be selected or created. For example, for implementing curve fitting using regression there are multiple algorithms such as gradient descent, Newton-Raphson and Levenberg-Marquardt. The choice of algorithm depends on the performance parameters the designer considers, such as speed of execution, accuracy and repeatability.
- Design of a control software: Ultimately, all the above steps have to be combined into a single piece of software that can self-learn the model of a manipulator. This step requires selection of the programming language and platform, and sufficient knowledge about the command signals to the hardware.

2.4 Summary

The main problem statement of this thesis has been explained in detail. The objective and motivation behind solving this problem are made clear in this chapter. Solving the problem consists of several sub-challenges, and the chapter gives an overview of the various problems that have to be addressed.

Chapter 3
Study of Existing Methods
Kinematic and dynamic models of the robot's own body are among the most important tools for the control and manipulation of robots. The process of learning the basic underlying mechanisms of the robot using its own actions on the world is known as model learning [46]. The availability of a large number of data analysis tools in statistics allows straightforward and accurate model approximation. The missing knowledge of the robot's model is completed by observing the robot's actions and reactions on the environment. This capability helps the robot self-model in unstructured, uncertain environments inaccessible to humans, such as space.

This thesis aims at achieving eye-hand coordination using a self modelling approach. The thesis objective can be divided into two main problems: (i) self modelling and (ii) eye-hand coordination. Eye-hand coordination can be stated as the ability to reach a point seen by the camera in the workspace of the robot using its manipulator. For perfect eye-hand coordination the robot has to have a perfect model of itself. Since the robot is not pre-programmed with the model, it has to derive the input-output relationship by observing its own actions. This is known as self modelling or model learning.

3.1 Self Modelling

Self modelling is the approach involuntarily practised in the natural world to learn the relationship between the eyes and the motor system. This model need not take the form of an explicit mathematical relationship between input and output, as is the case in many robots. Natural counterparts un-learn, re-learn and improvise the relationship between their eyes and hands as their experience in similar tasks increases. This is evident from the simple observation that only a sportsperson who practises excels in the field. It also enables natural counterparts to do a task in a different way, or to operate their limbs, even after an injury. The resilience and robustness seen in the natural world are highly desirable for engineered systems, since such capability ensures a longer life and endurance. This thesis concentrates on a robotic manipulator system capable of self modelling. Although most industrial manipulators are employed in structured environments for repetitive tasks, achieving robust performance under uncertainty must also be addressed. A few of the relevant approaches are listed in the following sub-sections.

3.1.1 Evolutionary Robotics Approach

This is one of the earliest approaches used by researchers to bypass the problem of modelling the robot controller, owing to the impossibility of foreseeing every problem the robot might face [48]. The evolutionary robotics approach is also called an automatic design procedure and is based on the concept of survival of the fittest. The concept can be explained as deriving the overall body plan of the robot by progressively adapting to a specific environment and to the specific problems confronted, through an artificial selection process that eliminates ill-behaving individuals in a population while favouring the reproduction of better adapted competitors [44].

The approach is implemented using evolutionary procedures like genetic algorithms [21], evolution strategies [59] or genetic programming [37]. The approach takes place in two steps. Initially, simulations performed on a specific robot model are downloaded to the actual system to check their fitness. In some cases, evaluations are performed on the actual robot itself rather than in simulation, so as to validate the real-world constraints. Finally, the selected model is used to modify the so-called evolvable hardware [56].

Though evolutionary robotics is a good approach, one obvious problem is that it involves a large number of trial-and-error runs, and hence experimentation will be expensive [28].

3.1.2 Continuous Self Modelling Approach

Josh Bongard et al. in their work [10] show how the resilience of machines can be improved by continuous modelling. The work describes how an engineered system can fail in an unexpected situation or under damage, and an approach to autonomously recover the system from such damage. The work uses a four-legged platform that derives its own model indirectly from its own actuation-sensation relationships. The model thus learned is used for the forward locomotion of the robot. The platform is capable of self adaptation once a leg part is removed. The algorithm developed consists of three stages: self-model synthesis, exploratory action synthesis and target behaviour synthesis. The algorithm employed in this work is outlined in figure 3.1. The approach is based on the concept of generating multiple internal competing models and generating actions that maximize the disagreement between the predictions of these models.

Figure 3.1: Algorithm outline of continuous self modelling machines [10]

3.1.3 Self Recalibrating Approach

Calibration is the technique of making modifications to the existing known model so as to match certain performance metrics like accuracy, precision and repeatability. Traditionally robots were calibrated by a human operator, which later evolved into imparting the robot with the ability to calibrate itself based on a comparison of set points and sensor measurements. The error between the set value and the observed value is minimized by modifying the known system model.

Self calibration techniques can be classified into two categories: (i) the redundant sensor approach and (ii) the motion constraint approach. Vision based sensors are used for accurate calibration of robots. The calibration of the Robonaut robot by NASA (figure 1.4) using visual feedback is an interesting work. In this work [47] the robot is calibrated by attaching a spherical marker to the end-effector of the manipulator and visually monitoring the marker position. A required configuration is commanded to the manipulator, which it achieves using the existing known system model. Using the existing model the position the end effector attains is predicted, and the attained end effector position is measured using a stereo camera. The error between the predicted and measured positions is minimized by modifying the robot model. The system automatically undergoes daily calibration procedures to meet the performance requirements.

A similar work is done by Alemu et al. in [6], which involves self calibration of a camera-equipped SCORBOT [54]. The work claims that there is no need for multiple precise 3D feature points on the reference coordinate system. The kinematic parameters of the manipulator are found by measuring the joint parameters over various trajectories which are planned and simulated. Later the measured values are compared with the simulated values to modify the system model. The paper uses a non-linear optimization algorithm to improve robustness. The usage of visual sensors avoids a drawback of sensor based approaches, namely that some of the kinematic parameters are dependent on the error model; such a dependency would result in a cumulative error in the system model. The problem with the motion constraint approach is that the error in locked joints may become unobservable. Other similar works are reported in [18], [45], [43] and [65].

3.1.4 Human Inspired Self Modelling

In [26] Justin W. Hart et al. describe robotic self models inspired by human development. The work concentrates on two aspects: the ecological self, that is, the agent's ability to encompass its sensorimotor capabilities, and self-efficacy, that is, its ability to impact the environment. The ecological self can be described as a cohesive, egocentric model of the body and its relationship to the environment, arising from the coordination between the senses [55]. The sense of self-efficacy describes the causal relationship between the motor actions, the body and the environment.

The approach used for deriving the relationship is called kinematic learning, in which the structure of a robot arm in the visual field of a calibrated stereo setup is used. The principle was implemented on a 2 DOF manipulator of a humanoid robot that was modelled using the concept of kinematic learning [27]. Since the output is directly observed, the error model of the system automatically gets incorporated into the system.

3.1.5 Using Bayesian Networks for Learning Robot Models

Anthony Dearden et al., in their work titled Learning Forward Models for Robots [17], describe the use of a Bayesian network for learning and representing the forward models of robots. The approach is outlined in figure 3.2. The feedback about the motor system comes from the visual sensors the system possesses. The system learns the model by observing the results of the actuation produced by sending motor babbling commands.

Figure 3.2: Learning forward models of robots using Bayesian Networks [17]

3.1.6 Learning Based on Imitation

This approach deals with the ability to imitate an action performed by another robot or individual in front of the system. Leitner et al. in [42] describe a method adopted to teach a humanoid platform named iCub to learn its model. The humanoid robot is trained using another manipulator which moves an object to different locations that are not known a priori. The locations lie within the work envelope of the humanoid robot. The humanoid uses a neural network [58] to relate its pose and visual inputs for object localization. In this work also, a prior model is available, which is refined into a more accurate model using principles of machine learning [8].

In short, self modelling makes the robot learn its model and recalibrate itself. The model can be continuously modified so as to attain better performance with each iteration. Most of this literature tries to derive the system model by looking at the system as a black box and deriving the model based on the input and the observed output.

3.2 Eye Hand Coordination

The intention of this work is to achieve eye-hand coordination using a self modelling approach. Eye-hand coordination is an important skill that a person performs effortlessly [64]. The non-quantitative nature of human vision makes this a skill that links perception to action in an iterative manner.

3.2.1 Using Discrete Kinematic Mapping

M. Xie in [64] describes a developmental approach to robotic hand-eye coordination. The work tries to develop a computational principle for mapping visual sensory data into the hand's motion parameters in Cartesian space. Eye-hand coordination involves the development of the kinematic model of the manipulator and its inverse kinematic equations. M. Xie proposes a new method, called discrete kinematic mapping, to directly map coordinates in the task space into coordinates in the joint space at different levels of granularity, which can be gradually refined through development. The approach is briefly illustrated in figure 3.3.

3.2.2 Principle of Automated Tracking

Another interesting work is by Peter K. Allen et al. [7], in which automated tracking and grasping of a moving object with a robotic hand-eye system is performed. The work focuses on three main aspects of eye-hand coordination: (i) fast computation of 3D motion parameters from vision, (ii) predictive control of a moving robotic arm and (iii) interception and grasping using the end effector of the manipulator. Principles of optical flow from computer vision, Kalman filter concepts and control algorithms employing a probabilistic noise model were utilised for the task.

Figure 3.3: M Xie's development approach [64]

3.2.3 Principle of Visual Servoing

The principle of visual servoing is to initially find the target location to be reached and then incrementally move the manipulator to the desired location. Here, tracking of the arm is performed so as to correct the trajectory while it moves towards the target. The tracking module and the robot control module together form a closed loop control in visual servoing. Visual servoing is divided into two types: (i) eye-in-hand and (ii) stand-alone. Eye-in-hand systems have a camera attached to the end effector with which the goal is reached, while a stand-alone system has a camera that overlooks the manipulator and the target. In the case of stand-alone systems, the work envelope of the manipulator is limited by the visual field of the camera. There are also methods which use two cameras, where one camera is on the end effector while the other overlooks the workspace. These are shown in figure 3.4.

Figure 3.4: Principle of visual servoing [38]

3.2.4 Coordination Using Fuzzy Logic

Sukir et al. in their work [39] describe the use of fuzzy logic for developing coordination between the eyes and hand of a robotic manipulator. The eye-hand coordination of a robotic manipulator is described as an ill-posed inverse mapping problem. The algorithm implemented uses the inverse homogeneous transformation technique to find the pose of the work piece with respect to the world coordinate system, given the pose of the robot's end-effector, the 2D image containing the work piece position and the camera parameters.

Usually a change in the camera position or the manipulator position demands re-calibration of the entire system. To avoid this, the authors calculated the projective transformation at the two extreme vertical positions of the camera (the range of vertical motion possible) and used fuzzy logic to determine the change in coefficients. The implementation of fuzzy logic consists of (i) determination of the fuzzification input (the squared distance between two known points in the image), (ii) selection of fuzzy inference rules for reasoning [36] and (iii) defuzzification and computation of the outputs (calculation of the required values of the coefficients). Though the algorithm claims to remove the need for re-calibration, the method can only be used in the case of a vertical displacement of the camera.

3.2.5 Calibration Using Active Vision

The principle of active vision is similar to that of visual servoing. In this method, multiple cameras (usually more than two) are used to overlook the workspace, and a camera is also attached to the end-effector. All the cameras that overlook the workspace are calibrated and fixed. The advantage of using multiple cameras is that it avoids visual occlusion and also gives better trajectory planning capabilities. The disadvantage of this system is that mobility of the manipulator platform is not possible, that is, the base of the manipulator should be in a fixed location. This approach is used in industries where robots are used for manufacturing jobs. The last image in figure 3.4 is an example of active vision. The same principle is used in the calibration of Robonaut by NASA [47]. Current industrial manipulators undergo calibration using active vision after a particular interval of task execution. This is very much needed in the case of precision placements and repetitive tasks. This method is well suited for structured indoor environments.

3.2.6 Un-calibrated Stereo Vision in Coordination

Nicholas H. et al. in [31] implement an affine stereo algorithm to estimate the position and orientation to be reached. Affine active contour models are used to track the real-time motion of the robot end-effector across the two images. The affine contours used are similar to B-spline snakes with discrete sampling points. The shape of the object itself is used as the tracking feature. In the algorithm implemented, at each time step the tracker moves and deforms to minimize the sum of squares of offsets between the model and the image edges.

3.3 Summary

Although some approaches exist to incorporate changes into the model using parametric identification, this is still difficult in the case of complex machines. To sum up, model learning can be a better substitute for manual re-programming for eye-hand coordination of the robot, since the robot tries to derive the model from the measured data directly. Model learning can be divided into two parts: (i) deducing the behaviour of the system from observed quantities and (ii) determining how to manipulate the system from the inferred data [46]. Many of the relevant approaches in the literature are cited above. Human inspired self modelling shows a clear departure from the traditional model learning paradigms and is thought to be fruitful if pursued.

Eye-hand coordination is still considered a task yet to be perfected, since a long gap remains in comparison with the natural counterparts. The problems of mechanical constraints pose a serious limitation and call for re-calibration after a certain number of cycles. One important thing to note is that many of these methods deal with manipulators that are either fixed or belong to a structured environment, and hence most of these methods cannot be implemented in unpredictable environments.

All these studies call for a new way to acquire eye-hand coordination in robotic manipulator-vision systems, which would improve their resilience and robustness and could be a step towards the attainment of singularity.

Chapter 4
Proposed Approach to the Problem
From the literature survey it is clear that black box imitation learning can at best produce the desired behaviour [46]. Deriving a relationship between the input and the output without seeking details of the internal system automatically takes into account the mechanical constraints, friction and other parameters which are ignored or only approximated in conventional modelling.

The following section deals with the important theory behind the approach used in this thesis. The approach is based on the assumption that the most common joints in a robot manipulator are either rotary (R) joints or prismatic (P) joints. Other joints, like the cylindrical joint (a combination of R and P) or the spherical joint (a combination of three R joints) [35], can be expressed as combinations of these joints. When a link attached by an R joint or a P joint moves, the trace of the tip of the link will be either a circle (or ellipse) or a straight line. To find the parameters of the partial curve produced by the trace, one needs curve fitting techniques such as regression analysis. The values required to learn the coarse model of the manipulator can be obtained from the best-fit curve to the raw data produced by the trace of the tip of the link. The end effector of a manipulator can be related to the base of the manipulator using either the Denavit-Hartenberg (DH) convention or the product of exponentials formulation. This work uses the DH convention since it has far fewer parameters. The methods employed in this work are explained in detail in the following sections.

4.1 Approach

The approach suggested by this thesis is illustrated in figure 4.1. The solution starts by operating the robotic manipulator using specific control commands transmitted to it wirelessly, while simultaneously recording the changes of the end effector position using a stereo camera. After the images are acquired, they are processed to obtain the manipulator end effector position, which is used as input to the curve fitting stage that finds the relationship between the raw data points.

Figure 4.1: Flow chart of the proposed solution

Using the curve fitting results and stereo vision principles (section 5.1), the various DH parameters (section A.0.2) are derived. Using the DH parameters of the manipulator, a coarse model of the system is formed. The coarse model is then optimized using a numerical optimization technique to derive a finer model. The model is finally validated by taking random configurations.

4.2 Steps in Detail

A detailed explanation of the various steps involved in the approach is given in the subsequent sections.

4.2.1 Acquiring Images and Manipulator Control

Since the manipulator works in a 3D world, the concept of depth perception is very important, and hence a stereo camera, which provides binocular vision, needs to be used. The camera should also be calibrated so that real-world values can be computed from the images. Camera calibration provides values like the centre of the image, the focal length of the lens, the distortion parameters and the extrinsic parameters between the two cameras of the stereo rig. The manipulator is controlled wirelessly so as to avoid any proximity hazards for the programmer. Only one joint of the manipulator is moved at a time, so that the trace of the end-effector due to that particular joint can be obtained.

4.2.2 Image Processing

The images obtained from the camera have to be processed to extract useful information. They have to be undistorted, rectified, pre-processed and segmented to obtain the marker attached to the end effector of the robot manipulator. The centroid of the segmented marker is computed to obtain the centre location of the marker in the image, and the image locations from both the left and right cameras are used to compute the 3D world position of the manipulator end-effector.

4.2.3 Curve Fitting and Finding the DH Parameters

The raw data obtained from the image processing stage has to be analyzed using statistical learning tools so as to derive the relationship within the data. As mentioned earlier, a revolute joint moves in a circular fashion, and hence the path taken by the end-effector is part of a circle. If the end-effector moves in a plane parallel to the image plane, its image trace is circular; if it moves in a plane that is not parallel to the image plane, its image trace is elliptical. By using circular and elliptical curve fitting, one can therefore deduce the complete parameters of the curve traced by the manipulator, and these are then used to derive the DH parameters of the manipulator.
Checking Validity
Using the details provided in appendix A.0.2, the transformation matrix can be calculated as follows; this matrix is used to check the validity of the model. The general transformation matrix from frame i to frame i-1 is given by equation 4.1 [35].

$$
^{i-1}T_{i} = \begin{bmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (4.1)
$$

The details of how to derive this transformation are discussed in Chapter 2. The parameters with subscript i belong to frame i and those with subscript i-1 to frame i-1; $a_i$ is the link length of link i and $d_i$ is the offset of frame i from frame i-1. The inverse transformation, from frame i-1 to frame i, is given by equation 4.2.

$$
^{i}T_{i-1} = \begin{bmatrix}
\cos\theta_i & \sin\theta_i & 0 & -a_i \\
-\sin\theta_i\cos\alpha_i & \cos\theta_i\cos\alpha_i & \sin\alpha_i & -d_i\sin\alpha_i \\
\sin\theta_i\sin\alpha_i & -\cos\theta_i\sin\alpha_i & \cos\alpha_i & -d_i\cos\alpha_i \\
0 & 0 & 0 & 1
\end{bmatrix} \qquad (4.2)
$$

By substituting the various computed manipulator values into the above matrices and using basic transformation fundamentals, the validity of the model can be checked: commands are given to the manipulator, the 3D location is predicted using the internal sensor readings, and the prediction is compared with the actual position reached, as measured at the end effector.
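As a minimal illustration of this check (the original software of this work was written in C++ and MATLAB; the Python sketch and the DH values below are assumptions for demonstration only), the homogeneous transform of equation 4.1 is built for each joint and the transforms are chained to predict the end effector position:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from frame i-1 to frame i (equation 4.1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table, joint_angles):
    """Chain the per-joint transforms and return the end effector position."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(dh_table, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Illustrative (d, a, alpha) triples per joint; not the learned values.
dh_table = [(0.26, 0.0, np.pi / 2), (0.0, 0.56, 0.0), (0.0, 0.51, 0.0)]
print(forward_kinematics(dh_table, [0.1, -0.3, 0.5]))
```

The position predicted by such a chain can then be compared against the position measured by the stereo camera to evaluate the model.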

4.2.4 Optimize DH Parameters

The DH parameters obtained in the previous step provide only a coarse model of the manipulator because of the noise present in the data. The DH parameters are therefore optimized and refined using a numerical optimization technique. The previous subsection illustrated how to check the validity of the coarse model; the computed error between the predicted value and the observed value is minimized by modifying the DH parameters, thereby refining the DH table. This step produces a highly refined result within the tolerance limit.

4.2.5 Validating the Model

The model finally obtained has to be validated. Initially the experiment starts with no knowledge of the DH table of the manipulator, and the final step provides the DH parameters required to describe it. The model can be validated by making the manipulator reach random goal locations using the obtained model; the random locations are supplied using a marker attached to a rod moved around the workspace by the programmer, which is identified and reached by the manipulator end-effector.

4.3 Summary

This chapter discussed the proposed solution to the problem posed in the previous chapters and explained the steps required to carry out the experiment and obtain the model of the manipulator. The experiment starts with only a little basic knowledge of the manipulator, namely the number and type of joints present, and with no knowledge of its DH parameters. The steps explain how the suggested approach can lead to a solution, which is discussed further in the later chapters.


Chapter 5
Stereo Image Processing and Obtaining Raw Data
5.1 Stereo Vision Fundamentals

Animals, including humans, see the world through a sophisticated visual system. The two eyes, separated by a distance, enable the brain to accurately produce a 3D representation of the world [14]. This is also the basis of various 3D imaging systems, such as the stereo camera shown in figure 7.2. Stereo vision works with the basic assumption that there is considerable overlap between the images formed on the left and right cameras. The principle behind stereo vision is that points in a scene seen by both cameras appear differently in each image because of the difference in the cameras' relative positions: the image in one frame, say the left frame, is shifted by a few pixels compared to the image in the right image plane. This difference between the corresponding pixels of the same 3D world point in the two images is called the disparity of the pixel. By calculating a disparity map for the entire image, the relative depth map of the world can be computed, provided one knows the internal and external parameters of the camera.

Figure 5.1: Schematic view of a 3D camera system



The depth of a 3D world point is calculated using equation 5.1:

$$ Z = \frac{f\,T}{d} \qquad (5.1) $$

where $f$ is the focal length of the camera in pixels, $T$ is the baseline between the two cameras in meters and $d$ is the disparity of the given point in pixels.
To find the image correspondences between the pixels in both images, several pre-processing steps have to be carried out, such as image undistortion, image rectification and image alignment [12].
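A minimal sketch of equation 5.1 follows, assuming an already rectified image pair; the focal length, baseline and disparity values used are illustrative, not measured ones:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z = f*T/d (equation 5.1); disparity in pixels, baseline in meters."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: f = 800 px, baseline T = 0.12 m, disparity d = 40 px gives Z = 2.4 m.
print(depth_from_disparity(40.0, 800.0, 0.12))
```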

5.1.1 Calibration of the Stereo Camera

The mathematical model that is used to describe image formation is the weak perspective model [12]. Imperfections in the camera lens lead to distorted images which, if not corrected, can give false readings during inference; this is avoided by calibrating the camera. The process of finding the intrinsic and extrinsic parameters of the camera so that the images can be corrected is known as camera calibration. The intrinsic parameters account for lens distortions such as radial distortion, tangential distortion and spherical aberrations, while the extrinsic parameters consist of the external camera parameters, namely translation and rotation.
Figure 5.1: Image pairs captured during stereo camera calibration; (a) left image, (b) right image, (c) left image, (d) right image


In the case of a stereo camera, both cameras have to be calibrated. Calibration is usually performed using a checkerboard, which is captured in the left and right camera images; the various parameters are then derived using the algorithm described in [30]. A MATLAB [23] toolbox for stereo calibration is available and can be found in [11]. A few of the images captured during calibration are shown in figure 5.1.
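The calibration in this work was done with the MATLAB toolbox cited above. Purely as an illustration of the same checkerboard procedure, the sketch below uses OpenCV's calibration API in Python; the pattern size, square size and file names are assumptions.

```python
import glob
import cv2
import numpy as np

# 3D checkerboard corner coordinates on the z = 0 plane (sizes are assumptions).
pattern = (9, 6)      # inner corners per row and column
square = 0.025        # square size in meters
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
# Hypothetical file naming: left_00.png / right_00.png pairs from the stereo rig.
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    left = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(left, pattern)
    ok_r, corners_r = cv2.findChessboardCorners(right, pattern)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

# At least one valid image pair is assumed to have been found above.
size = left.shape[::-1]
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print("baseline (m):", np.linalg.norm(T))
```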

5.2 Image Processing

The images obtained from the stereo camera need to be processed to infer data from them. The processing is done in three stages: first, a pre-processing step that undistorts and rectifies the camera images; second, filtering and smoothing of the images; third, segmentation of the images followed by calculation of the centroid of the segmented region to obtain the position of the marker attached to the end-effector. Image smoothing is performed to remove or suppress the high frequency noise present in the system; a median filter is used for effective removal of salt and pepper noise.

5.2.1 Image Segmentation

Segmenting an image means extracting a region of interest from it. Several methods exist for the segmentation of images [51]; all of them rely on some prominent feature of the image such as colour, texture, or scale invariant features like corners, edges and entropy changes. Since segmentation is not the chief aim of this thesis, a relatively simple method was chosen.
In this thesis, data is derived from the image using a simple colour thresholding technique. Since colour encoding in the RGB domain is strongly dependent on the external light intensity, the less susceptible HSV colour domain is used instead. An example of this method is shown in figure 5.2, and the technique can be expressed by equation 5.2:

$$ I(x,y) = \begin{cases} 1 & \text{if } I(x,y) > \text{threshold value} \\ 0 & \text{otherwise} \end{cases} \qquad (5.2) $$

Figure 5.2: Colour thresholding based image segmentation; (a) image, (b) HSV image, (c) segmented image
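A minimal sketch of the colour thresholding of equation 5.2 in the HSV domain, using OpenCV; the HSV range and file name are assumptions and would need tuning to the actual marker colour:

```python
import cv2
import numpy as np

def segment_marker(bgr_image, lower_hsv, upper_hsv):
    """Binary segmentation of a coloured marker by thresholding in HSV space."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Pixels inside the HSV range become 255 (the "1" of equation 5.2), others 0.
    return cv2.inRange(hsv, lower_hsv, upper_hsv)

frame = cv2.imread("left_frame.png")                  # hypothetical file name
mask = segment_marker(frame, np.array([0, 120, 70]), np.array([10, 255, 255]))
mask = cv2.medianBlur(mask, 5)                        # suppress salt-and-pepper noise
```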


5.2.2 Refining the ROI: Morphological Operations

Morphology in image processing is based on set theory. These operations are performed on binary images, where the sets are the objects in the image. Erosion and dilation are the two fundamental operations in morphological image processing.
Erosion: With A and B sets in $\mathbb{Z}^2$, the erosion of A by B, denoted $A \ominus B$, is given by equation 5.3:

$$ A \ominus B = \{\, z \mid (B)_z \subseteq A \,\} \qquad (5.3) $$

This means that the erosion of A by B is the set of all points z such that B, translated by z, is contained in A. B is usually a structuring element and it does not share any information with the background. Erosion can equivalently be represented by equation 5.4:

$$ A \ominus B = \{\, z \mid (B)_z \cap A^{c} = \emptyset \,\} \qquad (5.4) $$

where $A^{c}$ is the complement of A and $\emptyset$ is the empty set.


Dilation: Dilation can be defined as follows. With A and B as sets in Z2 , the dilation of A
by B, denoted , is defined as equation 5.5.
A B{z|(B)z A 6= }

(5.5)

Morphological closing is another operation in image processing where dilation followed by


erosion using the same structuring element is done. An illustration can be seen in figure
5.3c.

Figure 5.3: Morphological operations; (a) thresholded binary image, (b) and (c) images after the morphological close and open operations
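A short sketch of these operations on a synthetic binary mask, again using OpenCV; the structuring element size is an assumption tuned to the marker scale:

```python
import cv2
import numpy as np

# Synthetic binary mask with a blob and speckle noise, standing in for the segmented marker.
mask = np.zeros((120, 160), np.uint8)
cv2.circle(mask, (80, 60), 20, 255, -1)
mask[10, 10] = mask[100, 150] = 255            # isolated noise pixels

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
eroded  = cv2.erode(mask, kernel)                           # equation 5.3 / 5.4
dilated = cv2.dilate(mask, kernel)                          # equation 5.5
closed  = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # dilation then erosion
opened  = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # erosion then dilation
print("noise pixels left after opening:", int(np.count_nonzero(opened[:30, :30])))
```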

5.2.3 Computing the Centroid of the ROI

The end-effector of the manipulator should be described by a single point in the image. Since the segmented region contains more than one pixel, its centroid is calculated to denote the exact position of the end-effector. The centroid is calculated by computing the moments of the segmented image. A moment of an image is a weighted average of the pixels contained in the image; mathematically, for an image it can be represented as equation 5.6:

$$ M_{ij} = \sum_{x}\sum_{y} x^{i}\, y^{j}\, I(x,y) \qquad (5.6) $$

The central moments of the segmented image are given by $\mu_{pq} = \sum_{x}\sum_{y} (x - \bar{x})^{p} (y - \bar{y})^{q}\, I(x,y)$, and the centroid itself is computed from the raw moments as equation 5.7:

$$ \text{Centroid}_X = \frac{M_{10}}{M_{00}}, \qquad \text{Centroid}_Y = \frac{M_{01}}{M_{00}} \qquad (5.7) $$
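The centroid computation of equations 5.6 and 5.7 can be sketched as follows, using OpenCV's moment routine on a synthetic mask that stands in for the segmented marker:

```python
import cv2
import numpy as np

def marker_centroid(binary_mask):
    """Centroid of the segmented marker from raw image moments (equation 5.7)."""
    m = cv2.moments(binary_mask, binaryImage=True)
    if m["m00"] == 0:                      # no marker pixels found
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Synthetic mask: a filled circle centred at (80, 60) stands in for the marker.
mask = np.zeros((120, 160), np.uint8)
cv2.circle(mask, (80, 60), 15, 255, -1)
print(marker_centroid(mask))               # approximately (80.0, 60.0)
```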

5.3 Raw Data

Before using any algorithm to find a pattern in raw data, it is always better to analyze the raw data manually. This is not always possible: if the amount of raw data is huge, it is practically impossible to find a pattern in it, or to comment on its correctness, by manual observation alone. Here, however, the amount of data is small, with a maximum of 41 data points per joint. The raw data plots obtained from the marker tracking are shown below; the x axis contains the row coordinate of the image pixels and the y axis contains the column coordinate.
Figure 5.5 consists of images taken from the RHS camera of the stereo rig, while figure 5.4 consists of images taken from the LHS camera.
Comments on the raw data obtained
Since each of the joints present in the manipulator is a revolute joint, we expect an arc as the trace of the marker. The arcs can be part of a circle or of an ellipse: a circle viewed in a plane skewed with respect to the image plane of the camera appears as an ellipse.
(a): The raw data shows the trace of the end effector marker seen by the left and right cameras of the stereo rig. The DOF 1 data from both cameras look like points on an incomplete ellipse.
(b): The raw data for DOF 2 from both cameras look like part of a circle.
(c): The raw data for DOF 3 from both cameras look like part of a circle.
(d): The raw data for DOF 4 from both cameras also look like part of a circle.
The raw data obtained contains only a small amount of noise and is expected to fit well with the curve fitting methods explained in the next chapter.


Figure 5.4: Raw data plots of the LHS camera; (a) DOF 1, (b) DOF 2, (c) DOF 3, (d) DOF 4

Figure 5.5: Raw data plots of the RHS camera; (a) DOF 1, (b) DOF 2, (c) DOF 3, (d) DOF 4



5.4 Summary

The fundamentals of stereo vision were explained, along with the need for calibration and the calibration method used to derive useful information from the images. The acquired images are segmented using a simple thresholding technique, and the centroid of the segmented region is calculated by computing the moments of the image; this centroid gives the position of the end-effector of the robotic manipulator in the image. The raw data plots are presented in image X and Y coordinates, and the data visually resembles standard circular and elliptical curves. Further analysis of this data can be found in the later chapters.


Chapter 6
Data Analysis
6.1 Statistical Learning

The usage of various tools of statistics like regression, classification and clustering for
qualitative and quantitative analysis is known as Statistical Learning. This domain can be
classified into supervised learning and unsupervised learning. Supervised learning can be
defined as predicting or getting inference from a set of inputs and corresponding outputs.
Unsupervised learning can be defined as deriving structure or relationship in the input data
without using a supervisory output. All analysis problems of statistics can be collectively
classified into three. (i) Regression Problem (ii) Classification Problem or (iii) Clustering
Problem.
Supervised learning involves estimating a function that approximates the actual input-output relationship using input observations and corresponding outputs. The parameters on which the data depends are called predictors or variables. Suppose there are $p$ variables, denoted $X_1, X_2, \ldots, X_p$, for an output $Y$; then the actual relationship between input and output can be written as equation 6.1:

$$ Y = f(X_1, X_2, \ldots, X_p) + \epsilon \qquad (6.1) $$

where $\epsilon$ denotes the error term, taken to have zero mean. This function can be estimated by $\hat{Y} = \hat{f}(X_1, X_2, \ldots, X_p)$. The degree of closeness of $\hat{f}$ to the true $f$ of equation 6.1 determines the performance of the model. The observations and outputs used to estimate the function are called the training data, and the inputs used to predict or estimate new outputs are called the test data; the model is derived using the training data and its accuracy is evaluated on the test set. Supervised learning mainly consists of the regression problem and the classification problem [29]. The regression problem can be defined as the prediction of a quantitative value based on an estimated model, for example predicting the exact wage of a person using features like experience and qualification. The classification problem deals with qualitative inference from the data, for example predicting the survival of a cancer patient based on data about the survival and death of previous cancer patients; here we are interested only in the categorical outcome of whether the person will survive, not in some exact numerical value.
Unsupervised learning involves deducing patterns or relationships in the data so as to group observations into different clusters. This enables finding groups with similar characteristics. The difficulty with clustering is that there is no way to know the number of clusters in advance. Here we are not predicting or inferring any output from the observations; for example, to know which genes cause cancer, clustering algorithms are used to group genes that have similar gene expression.
This thesis uses only a few techniques of supervised learning, all of which involve the estimation of a function. Function estimation is done using two broad classes of methods: parametric methods and non-parametric methods. Parametric methods involve two steps. The first step is to assume a functional form that is expected to best estimate the given data; for example, if the analyst thinks a linear model will best fit the data, an equation such as $Y = \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p$ can be chosen, and estimating the function then reduces to finding the parameters $\beta_1$ to $\beta_p$. The second step is to estimate these parameters using the available training data. Non-parametric methods make no such assumption and consider several candidate functions to find the one best suited to the input data. Parametric methods are simpler since a functional model is already assumed, but they have the disadvantage that the chosen model may not adequately fit the data [29]. Non-parametric methods can estimate a function that comes very close to the actual function (within the allowed tolerance limit), but they can be computationally expensive because there is no prior idea of the best-fit function.
As mentioned, the performance of the estimated function is judged by the mean squared error calculated between the data and the approximated function. The mean squared error consists of a reducible part and an irreducible part. The reducible error is the deviation of the approximated function from the actual function; it is called reducible because choosing a better approximation technique results in a smaller error, and it can itself be decomposed into the bias and the variance of the estimate. The irreducible error comes from the noise term $\epsilon$ and cannot be eliminated no matter how well the function is approximated. Details of the relevant approximation techniques are covered in the following sections. Variance is the amount by which the approximated function would change if a different training set were used. Ideally this change should be very small, but flexible functions are often chosen for a better fit to the data, and highly flexible approximations deviate more because they are highly sensitive to the errors in the training data; a slight variation in the training set can then result in quite a different approximated function. In short, the more flexible the function, the higher its variance. Bias is the error introduced when a real-life problem is approximated by a model. Approximation always introduces error, and it is reduced by choosing more flexible functions that fit the data more precisely. It is seen that as flexibility increases, bias reduces, and initially the rate of decrease of bias is greater than the rate of increase of variance.


In short the total error present in the system will depend on flexibility of the function.
However the actual performance of the approximated function is to be done by calculating
mean squared error of the test data rather than the training data. Though bias reduces
with increase in flexibility of the model, it is observed that this influence does not last
forever. The effect of flexibility becomes constant beyond a point and at this time the error
is influenced by increasing rate of variance. Hence flexibility of the function is chosen as
an optimized value based on the relative rate of change of variance and bias.

6.2 Regression Analysis

Regression analysis can be defined as a statistical tool that is used to derive relationship
between variables from a set of data. We use regression analysis for curve fitting. There
are different types of regression analysis based on the type of curve we try to fit into the
available data. Here we use circular regression and elliptical regression.

6.2.1 Circular Regression

The most accurate and robust fit minimizes the geometric (orthogonal) distances from the observed points to the fitted curve. Fitting a circle to scattered data then involves minimizing [15]

$$ F(a, b, R) = \sum_{i=1}^{n} \left[ \sqrt{(x_i - a)^2 + (y_i - b)^2} - R \right]^2 \qquad (6.2) $$

where $x_i$, $y_i$ are the data coordinates and $(a, b)$ and $R$ are the centre coordinates and radius of the circle respectively. This is a relatively simple expression, and the Levenberg-Marquardt (LM) procedure is used for fitting the circle. The procedure is given in Algorithm 1 [15].
Algorithm 1 LM procedure for fitting a circle
1: Initialize $(a_0, b_0, R_0)$ and $\lambda$; compute $F_0 = F(a_0, b_0, R_0)$
2: Assuming that $a_k$, $b_k$, $R_k$ are known, compute $r_i$, $u_i$, $v_i$ for all $i$
3: Assemble the matrix $N$ and the vector $J^{T}\mathbf{g}$
4: Compute the matrix $N_\lambda = N + \lambda I$
5: Solve the system $N_\lambda h = J^{T}\mathbf{g}$ for $h$ by Cholesky factorization
6: If $\|h\|/R_k < \varepsilon$ (a small tolerance), terminate the procedure
7: Use $h = (h_1, h_2, h_3)$ to update the parameters: $a_{k+1} = a_k + h_1$, $b_{k+1} = b_k + h_2$, $R_{k+1} = R_k + h_3$
8: Compute $F_{k+1} = F(a_{k+1}, b_{k+1}, R_{k+1})$
9: If $F_{k+1} \geq F_k$ or $R_{k+1} \leq 0$, increase $\lambda$ and return to Step 4; otherwise increment $k$, decrease $\lambda$ and return to Step 2


Here $a_k$, $b_k$, $R_k$ denote the centre coordinates and radius of the circle at the $k$-th iteration, $r_i$ is the distance of the $i$-th data point from the current centre, and $u_i$ and $v_i$ are, up to sign, the partial derivatives of $r_i$ with respect to $a$ and $b$ respectively:

$$ r_i = \sqrt{(x_i - a)^2 + (y_i - b)^2} \qquad (6.3) $$

$$ u_i = \frac{x_i - a}{r_i} \qquad (6.4) $$

$$ v_i = \frac{y_i - b}{r_i} \qquad (6.5) $$

The matrix $N$, the Jacobian $J$ and the vector $J^{T}\mathbf{g}$ are assembled from these quantities as

$$ N = \begin{bmatrix} \overline{uu} & \overline{uv} & \bar{u} \\ \overline{uv} & \overline{vv} & \bar{v} \\ \bar{u} & \bar{v} & 1 \end{bmatrix} \qquad (6.6) $$

$$ J = \begin{bmatrix} u_1 & v_1 & 1 \\ \vdots & \vdots & \vdots \\ u_n & v_n & 1 \end{bmatrix} \qquad (6.7) $$

$$ J^{T}\mathbf{g} = n \begin{bmatrix} \overline{ur} - R\bar{u} \\ \overline{vr} - R\bar{v} \\ \bar{r} - R \end{bmatrix} \qquad (6.8) $$

$$ \mathbf{g} = [\, r_1 - R, \; \ldots, \; r_n - R \,]^{T} \qquad (6.9) $$

where a bar denotes the sample mean over the data points, e.g.

$$ \overline{uu} = \frac{1}{n}\sum_{i=1}^{n} u_i^2, \qquad \overline{uv} = \frac{1}{n}\sum_{i=1}^{n} u_i v_i \qquad (6.10) $$

The performance of the LM method depends on its initial estimate. LM combines the fast convergence of the Newton-Raphson method with the stability of the gradient descent method. The initial estimate is found by the method explained in Algorithm 2.
Algorithm 2 Algorithm for finding the initial estimate of the circle
1: For every set of three points chosen from the data set, do the following
2: Check whether the three points are collinear
3: If they are not collinear, calculate the circumcircle centre of the triangle formed by the three points
4: Store the centre coordinates and substitute them into the standard equation of the circle to get the radius
5: Store the x coordinate, y coordinate and radius in three arrays
6: Repeat this for the $^{n}P_{3}$ combinations, where n is the number of data points
7: Return the mean of all the x coordinates, y coordinates and radii
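As an illustrative sketch of the geometric circle fit of equation 6.2, the code below delegates the Levenberg-Marquardt iteration to SciPy rather than implementing Algorithm 1 directly, and uses the data centroid and mean radius as the initial estimate instead of the circumcentre averaging of Algorithm 2:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_circle(x, y):
    """Geometric circle fit: minimize sum of (distance to centre - R)^2 (equation 6.2)."""
    def residuals(p):
        a, b, R = p
        return np.hypot(x - a, y - b) - R

    # Simple initial estimate: centroid of the data and mean distance to it.
    a0, b0 = x.mean(), y.mean()
    R0 = np.hypot(x - a0, y - b0).mean()
    sol = least_squares(residuals, [a0, b0, R0], method="lm")
    return sol.x  # (a, b, R)

# Noisy points on an arc of a circle centred at (2, -1) with radius 3.
t = np.linspace(0.2, 2.5, 40)
x = 2 + 3 * np.cos(t) + np.random.normal(0, 0.05, t.size)
y = -1 + 3 * np.sin(t) + np.random.normal(0, 0.05, t.size)
print(fit_circle(x, y))   # approximately [2, -1, 3]
```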


6.2.2 Elliptical Regression

The revolute joints can produce an elliptical end-effector trace if the end effector is moving in a plane that is not parallel to the camera plane. Elliptical regression analysis is used to fit an ellipse to such a trace. An ellipse can be described by the following general conic equation [61]:

$$ a m_1^2 + b m_1 m_2 + c m_2^2 + d m_1 + e m_2 + f = 0 \qquad (6.11) $$

where the coefficients $(a, b, c, d, e, f)$ are such that $a^2 + b^2 + c^2 > 0$ and the discriminant $\Delta = b^2 - 4ac < 0$.


The equation of the ellipse in matrix form can be written as $\theta^{T} u(x) = 0$ with the constraint $\theta^{T} F \theta > 0$, where

$$ F = \begin{bmatrix} 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad (6.12) $$

$$ u(x) = [\, m_1^2,\; m_1 m_2,\; m_2^2,\; m_1,\; m_2,\; 1 \,]^{T} \qquad (6.13) $$

$$ \theta = [\, a,\; b,\; c,\; d,\; e,\; f \,]^{T} \qquad (6.14) $$

The value of $\theta$ that minimizes the cost function described in (6.15) represents the best fit ellipse to the data points. An ellipse can be fit to the data using the ellipse fitting algorithm proposed by Zygmunt L. Szpak et al. in [61]. That algorithm also uses the LM method for optimizing the merit function, with a seed value found as described in [61], and is a combination of algebraic and geometric fitting. The algorithm is summarized in Algorithm 3.

$$ P(\theta, \lambda) = J_{AML}(\theta) + \lambda\, g(\theta) \qquad (6.15) $$

where

$$ J_{AML}(\theta) = \sum_{n=1}^{N} \frac{\theta^{T} A_n \theta}{\theta^{T} B_n \theta} \qquad (6.16) $$

$$ g(\theta) = \frac{\|\theta\|^{4}}{(\theta^{T} F \theta)^{2}} \qquad (6.17) $$

$$ A_n = u(x_n)\, u(x_n)^{T} \qquad (6.18) $$

$$ B_n = \partial_x u(x_n)\, \Lambda_{x_n}\, \partial_x u(x_n)^{T} \qquad (6.19) $$

Here $\Lambda_{x_n}$ is the covariance matrix describing the uncertainty of the data point $x_n$, and $\lambda$ is a very small value, of the order of $10^{-15}$.

Algorithm 3 Algorithm for fitting an ellipse
1: Set the tolerance limit and the initial seed $\theta_0$ using [25]
2: Compute the residual vector $r(\theta) = [r_1(\theta), \ldots, r_{N+1}(\theta)]^{T}$, where $r_n(\theta) = \sqrt{\theta^{T} A_n \theta \,/\, \theta^{T} B_n \theta}$ and $r_{N+1}(\theta) = \sqrt{g(\theta)}$; now $P(\theta) = r(\theta)^{T} r(\theta)$
3: Compute the Jacobian matrix of $r(\theta)$ and find $\nabla P(\theta)$
4: Compute the Hessian matrix of $P(\theta)$, $\nabla^{2} P(\theta)$, with the Gauss-Newton approximation
5: Repeat the following until $J_{AML}^{\,k+1} - J_{AML}^{\,k} <$ tolerance limit: $\theta_{k+1} = \theta_k + \delta_k$, $\delta_k = -(\nabla^{2} P(\theta) + \lambda_k I_6)^{-1}\,(0.5\, \nabla P(\theta))$, where $\lambda_k$ is a non-negative scalar
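The guaranteed ellipse fit of Szpak et al. is involved; purely for illustration, the sketch below uses a simpler direct algebraic conic fit (minimizing the algebraic residual by SVD). This is not the thesis method and does not guarantee an ellipse, but it conveys how the conic coefficients of equation 6.11 can be estimated from the trace points:

```python
import numpy as np

def fit_conic_algebraic(x, y):
    """Direct algebraic conic fit: minimize ||D theta|| subject to ||theta|| = 1.

    A simpler stand-in for the guaranteed ellipse fit of Szpak et al.; it returns
    the conic coefficients [a, b, c, d, e, f] of equation 6.11.
    """
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    # The right singular vector with the smallest singular value minimizes ||D theta||.
    _, _, Vt = np.linalg.svd(D, full_matrices=False)
    return Vt[-1]

# Noisy points on part of an ellipse centred at (1, 2) with semi-axes 4 and 2.
t = np.linspace(0.3, 2.8, 40)
x = 1 + 4 * np.cos(t) + np.random.normal(0, 0.02, t.size)
y = 2 + 2 * np.sin(t) + np.random.normal(0, 0.02, t.size)
a, b, c, d, e, f = fit_conic_algebraic(x, y)
print("discriminant b^2 - 4ac:", b**2 - 4 * a * c)   # negative for an ellipse
```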

For a P joint the trace produced by the end-effector is a straight line, which can be found by fitting a line segment; this is not implemented in this work since the manipulator used did not possess a P joint. The fitted curve obtained after regression represents the ideal curve the end-effector would have followed. After the data is fit, the centres of the fitted curves are obtained; each centre represents the point about which a particular joint turned to produce that particular trace of the end-effector. These points are back-projected to 3D using the stereo vision principles explained in [22] so as to calculate the link lengths of the manipulator. The minimum and maximum angles through which each link can turn are determined by calculating the angles between the links at the beginning and end of the data points of each DOF. The parameters thus calculated are used as a coarse estimate of the various link parameters; the coarse estimates produced by the experiment are presented in chapter 8.

6.3 Non-Linear Optimization

6.3.1 Nelder Mead Approach

After an initial estimate is obtained, the link parameters are further processed using an
unconstrained numerical optimization technique. A random joint configuration is generated
using a simple random generator. The generated values are such that they lie within the
joint limits found above. The base point found is not altered. Transformation matrices are
calculated employing DH convention between the end effector and the base joint.
$$ ^{0}T_n = {}^{0}T_1\, {}^{1}T_2 \cdots {}^{n-1}T_n \qquad (6.20) $$

The predicted end effector position is compared with the visually obtained data, and the error is minimized by modifying the various parameters of the DH table, searching in the DH parameter space using the Nelder-Mead optimization technique [47]. Around 10 different random configurations are generated and the process is repeated; each time the DH parameters are modified, resulting in a refined DH table. Using the transformation matrix above, an estimated point based on the joint variables of the robot is calculated as

$$ P_{estimate} = {}^{n}T_{0}\, P_{base} \qquad (6.21) $$

The observed world position of the robot end effector is found using the stereo image processing described in Chapter 5 and is denoted by $P_{observed}$. The difference between the estimated position and the observed position, $P_{estimate} - P_{observed}$, is minimized by modifying the parameters of the transformation matrix $^{0}T_n$ using the Nelder-Mead optimization technique. For an n-link manipulator there are 4n DH parameters, one of which per link is the control variable (the joint angle for an R joint, the joint offset for a P joint). Using the Nelder-Mead scheme, a simplex of 3n + 1 vertices (each vertex of dimension 3n) is created and the vertex with the minimum function value is sought; the vertex producing the minimum value when substituted into equation 6.22 gives the set of optimal DH parameters.
The algorithm used is described in Algorithm 4. The standard values of the reflection, expansion, contraction and shrink coefficients $\alpha$, $\gamma$, $\rho$, $\sigma$ are 1, 2, 0.5 and 0.5 respectively [40].
Algorithm 4 Nelder Mead simplex algorithm for refining the solution
1: Compute an initial simplex of n + 1 vertices ($x_i$, $i = 1 \ldots n+1$), where n is the dimension of the search space
2: Order: order the n + 1 vertices to satisfy $f(x_1) \leq f(x_2) \leq \cdots \leq f(x_{n+1})$
3: Compute the centroid $\bar{x} = \sum_{i=1}^{n} x_i / n$
4: Reflect: compute the reflection point $x_r = (1 + \alpha)\bar{x} - \alpha x_{n+1}$ and evaluate $f(x_r)$; if $f_1 \leq f_r \leq f_n$, accept $x_r$ and terminate the iteration; if $f_r < f_1$ go to Step 5, otherwise go to Step 6
5: Expand: calculate the expansion point $x_e = (1 + \gamma)\bar{x} - \gamma x_{n+1}$ and evaluate $f(x_e)$; if $f_e < f_r$ accept $x_e$, otherwise accept $x_r$; terminate the iteration
6: Contract: if $f_r \geq f_n$, contract between $\bar{x}$ and the better of $x_{n+1}$ and $x_r$
7: 6.a. Contract inside if $f_r \geq f_{n+1}$: $x_{ci} = (1 - \rho)\bar{x} + \rho x_{n+1}$; evaluate $f(x_{ci})$; if $f(x_{ci}) < f(x_{n+1})$ accept $x_{ci}$ and terminate, otherwise go to Step 9
8: 6.b. Contract outside if $f_n \leq f_r < f_{n+1}$: $x_{co} = (1 + \rho)\bar{x} - \rho x_{n+1}$; evaluate $f(x_{co})$; if $f_{co} \leq f_r$ accept $x_{co}$ and terminate, otherwise go to Step 9
9: Shrink: compute $v_i = x_1 + \sigma(x_i - x_1)$, $i = 2, \ldots, n+1$, and replace the simplex of Step 1 by $x_1, v_2, \ldots, v_{n+1}$
10: Repeat the iteration
In our case the function f to be minimized is

$$ f = \mathrm{error}(P_{estimate} - P_{observed}) \qquad (6.22) $$

$P_{estimate}$ can be found using equation 6.21, expanded joint by joint in equation 6.23, and the error is then computed as given in equation 6.22:

$$ P_{estimate} = [\,{}^{n}T_{n-1}\,][\,{}^{n-1}T_{n-2}\,] \cdots [\,{}^{2}T_{1}\,][\,{}^{1}T_{0}\,]\, P_{base} \qquad (6.23) $$

where the inputs to the transformation matrices come from the experimentally obtained values and the joint values are the input commands given to each actuator.
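A compact sketch of this refinement step: the coarse DH parameters are flattened into a vector and refined with SciPy's Nelder-Mead implementation by minimizing the squared position error of equation 6.22 over the recorded configurations. The helper functions repeat the DH sketch of Chapter 4, and all names and option values are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.optimize import minimize

def dh_transform(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca, st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0, sa, ca, d],
                     [0.0, 0.0, 0.0, 1.0]])

def end_effector(dh_flat, q):
    """Forward kinematics for a flattened (d, a, alpha) table and joint angles q."""
    T = np.eye(4)
    for (d, a, alpha), theta in zip(np.reshape(dh_flat, (-1, 3)), q):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

def refine_dh(coarse_flat, configurations, observed_positions):
    """Nelder-Mead refinement of the coarse DH parameters (section 6.3.1)."""
    def cost(flat):
        # Equation 6.22 summed over the recorded random configurations.
        return sum(np.sum((end_effector(flat, q) - p) ** 2)
                   for q, p in zip(configurations, observed_positions))
    res = minimize(cost, np.asarray(coarse_flat, float), method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 20000})
    return res.x.reshape(-1, 3)
```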

6.4 Simulation

6.4.1 Data Generation

Real data is always corrupted by noise, so two things have to be checked to assess the performance of the algorithm: (i) its performance in the presence of noise in the data, and (ii) its performance when only partial data is present, where partial data means raw data that does not form a complete curve. The data was generated using the standard parametric equations of a circle and of an ellipse (6.24 and 6.25), with normally distributed random error added. The variance of the error distribution was chosen to be 0.1 for figures 6.3b, 6.4a and 6.4b, while the variance for the data in figure 6.3a is 0.3. Figures 6.1 and 6.2 show the simulated data and the corresponding fits produced by the algorithm; the red circular dots denote the simulated points and the continuous line denotes the curve fitted by the algorithm.

$$ x = x_c + r\cos\theta + \epsilon_x, \qquad y = y_c + r\sin\theta + \epsilon_y \qquad (6.24) $$

where $(x_c, y_c)$ is the centre, $r$ denotes the radius of the circle and $\epsilon_x$, $\epsilon_y$ are Gaussian errors, and

$$ x = x_c + a\cos\theta + \epsilon_x, \qquad y = y_c + b\sin\theta + \epsilon_y \qquad (6.25) $$

where $a$ and $b$ denote the semi-major and semi-minor axis lengths of the ellipse.
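A small sketch of this data generation, following equations 6.24 and 6.25; the centres, radii, angular ranges and noise levels below are illustrative, and the function takes a standard deviation while the text specifies variances:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arc(center, a_axis, theta_range, n=40, sigma=0.1, b_axis=None):
    """Noisy points on a circular arc (eq. 6.24) or elliptical arc (eq. 6.25).

    If b_axis is None the curve is a circle of radius a_axis; partial data is
    obtained simply by narrowing theta_range.
    """
    theta = np.linspace(theta_range[0], theta_range[1], n)
    b_axis = a_axis if b_axis is None else b_axis
    x = center[0] + a_axis * np.cos(theta) + rng.normal(0, sigma, n)
    y = center[1] + b_axis * np.sin(theta) + rng.normal(0, sigma, n)
    return x, y

# Partial circle with error variance 0.1 and partial ellipse with variance 0.3.
cx, cy = simulate_arc((0, 0), 5.0, (0.2, 4.0), sigma=0.1 ** 0.5)
ex, ey = simulate_arc((2, 1), 6.0, (0.5, 3.0), sigma=0.3 ** 0.5, b_axis=3.0)
```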

6.4.2 Results of Simulation

The following plots show the simulated data together with the fitted curves. The red dots denote the data with error incorporated and the continuous curve denotes the fit found by the algorithms.
Circular fit: The circular fits obtained after giving simulated data to the algorithm are shown in figures 6.1 and 6.2.
Elliptical fit: The elliptical fits obtained after giving simulated data to the algorithm are shown in figures 6.3 and 6.4.


Figure 6.1: Simulated circle data; (a) Case I, (b) Case II

Figure 6.2: Simulated circle data, continued; (a) Case III, (b) Case IV

Figure 6.3: Simulated ellipse data; (a) Case I, (b) Case II

Figure 6.4: Simulated ellipse data, continued; (a) Case III, (b) Case IV

Figure 6.5: Comparison with Gradient Descent and LM method


6.4.3 Comments on Simulation

The simulation results show that the algorithm works successfully on the simulated data; all the curves converged in less than 200 ms. Apart from the LM algorithm explained above, the popular gradient descent algorithm [62] was also tried in order to compare performance. The simulations show that the LM method performs better than gradient descent, as illustrated in figure 6.5: gradient descent took more time to converge and was more vulnerable to local minima than the LM method in these simulations.

6.5 Summary

This chapter discussed the main principles behind statistical learning and the few tools from it that were used in this thesis work. Non-linear regression analysis was performed for curve fitting and was tested using simulated data sets. A numerical optimization technique, the Nelder Mead simplex algorithm, was then used to refine the DH parameters.


Chapter 7
Experiment Setup
This chapter discusses various technical aspects of the hardware used in the experiment and
detailed steps required to carry out the experiment.

7.1 Hardware Used

The main hardware used in the experiment are the Invenscience LC ARM 2.0 [1] and the Point Grey Bumble Bee stereo camera [2].

7.1.1 The Invenscience LC ARM 2.0

The Invenscience LC ARM 2.0 is a robot manipulator with 6 DOF. Figure 7.1 depicts the manipulator used in the experiment. It has a base joint, shoulder joint, elbow joint, wrist joint and a two-finger end-effector. All of these are revolute joints, with the base, shoulder and elbow joints having 1 DOF each, the wrist joint 2 DOF and the end effector 1 DOF. Each actuator in the manipulator is closed-loop controlled (PID control) using a position sensor (a linear potentiometer) and a microcontroller. The master controller used is the Pololu Micro Maestro [3], which sends PWM signals to all the slave controllers of the actuators.
The protocol used is the Pololu Compact Protocol, which has several commands such as set position, get position, set speed, set acceleration, set PWM, get moving state and get error. Of these, the main serial servo commands of interest are the set position command (7.1) and the get position command (7.2).

<set position command>, <DOF number>, <LSB of target position>, <MSB of target position>   (7.1)

An example would be [0x84, 0x02, 0x70, 0x2E], which instructs the controller to set the position of actuator number two to a 1.5 ms pulse.

<get position command>, <DOF number>   (7.2)

An example would be [0x90, 0x02], which instructs the controller to return the position of actuator number two.
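As an illustration of driving the manipulator with the set position command over the serial link, the sketch below assembles the compact-protocol frame in Python using pyserial. The port name is an assumption; on the experimental setup the Zigbee link appears to the PC as an ordinary serial port.

```python
import serial  # pyserial

def set_target(port, channel, target_us):
    """Send a Pololu compact-protocol Set Target frame (command byte 0x84).

    The Maestro expects the target in quarter-microseconds, split into two
    7-bit bytes, least significant byte first.
    """
    quarter_us = int(round(target_us * 4))
    frame = bytes([0x84, channel, quarter_us & 0x7F, (quarter_us >> 7) & 0x7F])
    port.write(frame)

# Hypothetical port name; baud rate is an assumption.
with serial.Serial("COM5", 9600, timeout=1) as port:
    set_target(port, 2, 1500)   # drive DOF 2 to a 1.5 ms pulse
```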

Figure 7.1: The Invenscience LC ARM 2.0

7.1.2 Stereo Camera

For observing the effect of the robot's motion on the environment, a stereo camera was used. A stereo camera consists of two cameras on a rigid support, separated by a distance, with a large visual overlap between them; the camera on the right side is called the RHS camera and the camera on the left the LHS camera. By computationally finding image correspondences between the images formed on the RHS and LHS imagers, the relative depth is calculated using the principle of triangulation [12].
The stereo camera used in this experiment is the Bumble Bee camera from Point Grey International, shown in figure 7.2. The camera has its own SDK and communicates via the FireWire port of the computer. The camera is calibrated beforehand and the calibration values are accessible from the memory registers of the camera using specific instructions from the SDK.

Figure 7.2: The Bumble Bee stereo camera from Point Grey International


7.1.3 Other Hardware

The power supply for the manipulator was provided by an SMPS rated at 12 V and 6 A. Another modification made to the controller was to attach a Zigbee transceiver so that the manipulator can be controlled remotely. The modem connected to the manipulator is configured as a router, and the master modem (the coordinator) is attached to the computer that controls the manipulator.

7.2 Detailed Steps of the Procedure

Figure 7.3: The complete experimental setup


Figure 7.3 depicts the experimental setup used to test the approach of this work. The stereo camera is mounted on a stand and kept at a distance from which it can see the entire manipulator. The manipulator is controlled wirelessly through a Zigbee [4] module attached to its on-board controller. The end effector is observed using the stereo camera as the manipulator moves, and only one joint is moved at a time while all other joints are kept fixed. The entire task can be divided into the following subtasks:


a. Obtain and process images from the stereo camera
The Bumble Bee 2.0 produces raw images which have to be undistorted and rectified.
b. Video segmentation of the marker
This task comprises tracking the spherical marker fixed onto the end effector. The marker is spherical since its shape is pose-invariant in the image plane [12].
c. Prediction of the best fit curve on the raw data: regression
The positions of the marker obtained in subsequent frames are analyzed and a curve is fit to them.
d. Calculation of the DH parameters
After the best fit curve is found, the various manipulator parameters are derived from the curve.
The entire procedure is based on the simple intuition that the tip of a link attached by an R-joint or a P-joint (the joints usually found in most manipulators) follows a circular (or elliptical) trace or a straight line. Initially, the number of degrees of freedom present in the manipulator is fed into the software. Once this is given, the software instructs each of the joints to move successively, keeping the other joints immobilized, while the spherical marker attached to the end effector of the robot manipulator is tracked. As the first joint turns, the software records the images from both the left and right cameras at a rate of 0.1 Hz. The joint motor is actuated from one extreme to the other by incrementing the control signal to that particular degree of freedom; in the case of the ARM 2.0 the control signals are in the form of PWM. Subsequently, the next joint is taken and an incremental actuation signal is supplied with simultaneous recording of stereo images, keeping all other joints fixed. Using simple colour-based segmentation, the marker is extracted from the image and its centroid is calculated; this calculated centroid is taken as the centre of the marker and the pixel value is recorded. The centroid is calculated by finding the moments of the extracted marker image [22]. The obtained values are plotted in a graph and regression analysis is used to find a best fit curve for this data, using the theory explained in the previous chapter.
Initially the base joint of the manipulator has to be determined. So the base joint is
actuated in its entire range and images are recorded. Since the later calibration of the
software depends on the accuracy of its base coordinate location, this experiment is repeated
at least 10 times to find the base coordinate. The final estimate is taken as the mean of the
10 experimental values obtained.

7.3 Design of the Control Software

The control software consists of several modules; an overall view of the software is shown in figure 7.4. The software scans for the availability of the Bumble Bee stereo camera and, if available, shows its outputs in two figure handles of the main user interface. The software can scan the available serial ports on the system and connect to one on request. Once the calibrate button is clicked, the software starts sending commands to the manipulator, one joint after the other, and records images from the left and right cameras to a specified file location. The operation can be interrupted at any time by the user. A single iteration of the experiment consists of the incremental motion of each joint and the capturing of images. The software also has another handle which shows the model learned for the manipulator; it is displayed as the third figure handle in the main GUI.

Figure 7.4: The control software interface

7.3.1 Software Architecture

The main modules inside the software can be represented according to figure 7.5. The
software architecture consists of 5 modules which were running on 4 different threads.
Two of the modules were running on a common thread since they are serial processes.
Main Interface
This is the module that contains the graphics module which interacts with the user. It
contains the GUI resources like the button control, picture box control, edit box control etc.
Serial Port Module
Serial port communication is widely used in industries, commercial applications and academic research. Figure 7.6 depicts the configuration of serial port module.


Figure 7.5: Control Software Architecture


Stereo Camera Module
The stereo camera module is based on the SDKs provided by Point Grey Inc., namely FlyCapture and Triclops. These SDKs contain built-in algorithm wrappers for performing various stereo camera operations. The module is depicted in figure 7.7.
Image Processing Module
This module is designed to handle image processing operations like segmentation, obtaining the trace of the end effector, etc. It is optional since its functionality was incorporated into the analysis toolbox designed in MATLAB, which is discussed later.
OpenGL Module
This module works on an independent thread and is responsible for the creation of the link model of the manipulator as the software learns the various parameters of the manipulator.

Figure 7.6: Serial Port Configuration [9]

Figure 7.7: Stereo Camera Module


The structure of this module is explained in figure 7.8.

Figure 7.8: The OpenGL Module handling the creation of link model of manipulator

7.4 Design of the Analysis Tool Box

The analysis toolbox was designed in MATLAB due to its ease of programming compared to the C++ language. The toolbox takes the stereo images produced by the control software, analyses them and produces results, plots and a text file containing the results. Parameters like link length, link angle and the range of actuator limits are also calculated by this toolbox. The toolbox is shown in figure 7.9.

Figure 7.9: The Analysis toolbox designed for deducing information from raw data produced by the manipulator
The various modules of this toolbox are the Finding Trace module, the Curve Fitting module and the Text File Generating module. The Finding Trace module segments the image and uses morphological operations (refer section 5.2.2) to find the trace of the end effector of the manipulator; the algorithms used are explained in detail in the previous chapters (refer section 6.2). The Curve Fitting module finds the best fitting curve for the data points: based on the data, it fits either a circle or an ellipse to the available points. The best fit is selected by comparing the least squares error between each fitted curve and the raw data points; the curve with the lowest least squares error is taken as the best fit. Based on the curve fit, parameters like the actuator limits and link angles are found. The link lengths are found by computing the 3D world coordinates of the actuator positions using the stereo triangulation principles discussed in section 5.1.

7.5 Summary

The hardware selected for the robotic manipulator and its interface has been discussed in detail; details regarding the mathematical modelling of a manipulator can be read in Appendix A. A manipulator of Invenscience make was selected based on the requirements, and a circuit was made to communicate wirelessly with the manipulator from the PC. The stereo camera selected is one of the commonly used stereo rigs in research. Two different pieces of software were designed: one to derive the model of the manipulator and one to analyse the data produced in the experiment. The first was made in Visual Studio 2010 using C++, and the analysis software was made in MATLAB. The results produced by the experiment and the model derived are given in the next chapter.


Chapter 8
Results and Discussion
8.1 Results

The base joint coordinate was determined using the curve fitting algorithm. The results obtained in 10 experiments are recorded in table 8.1; all distances are measured in meters.
Experiment   X Coordinate   Y Coordinate   Z Coordinate
1            1.102963       1.260277       2.225095
2            1.102981       1.260278       2.225091
3            1.102960       1.260290       2.225077
4            1.102974       1.260281       2.225086
5            1.102971       1.260263       2.225083
6            1.102964       1.260286       2.225088
7            1.102977       1.260291       2.225086
8            1.102988       1.260271       2.225071
9            1.102974       1.260295       2.225093
10           1.102989       1.260269       2.225072
Mean         1.1029741      1.2602801      2.2250842

Table 8.1: Coordinate locations of the base joint (all distances in meters)
In short, $P_{base} = [1.1029741, 1.2602801, 2.2250842]^{T}$.
A screenshot of the control software after calibration is shown in figure 8.1. The results of the analysis software after curve fitting and parameter identification are shown in figures 8.2, 8.3, 8.4 and 8.5: the red dots indicate the raw data, that is, the trace of the marker at the end effector; the blue curve denotes the curve produced by the curve fitting algorithm; and the yellow dot represents the calculated centre. The various link parameters calculated from the analysis are summarised in tables 8.2, 8.3 and 8.4, and the various stages of the analysis software are depicted in figures 8.6 to 8.10.


Figure 8.1: Screenshot of user interface software after calibration

Figure 8.2: Curve fitting result of DOF 1; (a) left camera, (b) right camera

8.1.1 Validation of the Model

The various link parameters obtained for the ARM 2.0 were tested on the manipulator using simple inverse kinematics. The de-coupling technique described in Chapter 2 was used to check the model, and only the inverse position of the manipulator was calculated. The joint parameters of the first three joints, $\theta_1$, $\theta_2$ and $\theta_3$, are calculated using equation A.14. Using the DH table obtained, the model was validated by making the manipulator reach a target specified by a red marker attached to a rod held at random configurations. The red rod is identified using simple segmentation and its 3D location is computed using stereo vision principles. The $[d_x, d_y, d_z]^{T}$ of equation A.13 is supplied with the world coordinates derived from the stereo camera data, and the joint variables obtained are given back to the manipulator to reach the target position. The results obtained were satisfactory.

Figure 8.3: Curve fitting result of DOF 2; (a) left camera, (b) right camera

Figure 8.4: Curve fitting result of DOF 3; (a) left camera, (b) right camera


Two screenshots of such random configurations are shown in figures 8.11 and 8.12. The red coloured box in each screenshot indicates the marker location identified by the control software.

8.2 Discussion

The work started with the notion of giving a robot system the ability to learn its own model so that it can re-model itself in case of minor damage or in the absence of a human programmer to recalibrate the system. The robot system chosen for this thesis work is a robot manipulator whose model is unknown. The intention was to obtain the model of the manipulator, which could be readjusted over time so as to get a better coordination between the camera and the manipulator. The work is greatly influenced by the ability of human beings to derive a near-perfect relationship between their eyes and hands, so that the hands reach the location where the eyes see.
58

Figure 8.5: Curve fitting result of DOF 4; (a) left camera, (b) right camera


DOF   Link Length (OEM)   Link Length (AM)   Link Length (EMM)   Percentage Deviation
1     0.2600              0.2741             0.2691              3.5%
2     0.5650              0.5804             0.5774              2.1%
3     0.5150              0.5215             0.5195              0.87%
4     0.0650              0.0707             0.0700              7.6%

Table 8.2: Link lengths learned after analysis


The literature survey dealt with some current approaches and methods that are used in various robotic systems to obtain a self model, as well as some techniques to improve eye-hand coordination.
The solution described by this work is highly intuition-based. The work relies on a few elements of statistical learning, such as regression, and on some model optimization techniques. A control software tool was designed and implemented which was supplied with a few pieces of basic information, namely the number and type of joints present in the manipulator. The software observed the action and reaction of the manipulator using visual feedback so as to derive the various manipulator parameters. These parameters were used to create the DH-parameterized model of the manipulator. The model obtained was validated by using it to find the inverse-position joint parameters, which gave satisfactory results.
These results show that the proposed solution is valid and can be further refined to obtain a better model of the manipulator. However, the orientation of the end effector is yet to be handled.


Figure 8.6: Analysis ToolBox Image Segmentation

DOF   Twist Angle (AM)   Twist Angle (EMM)   Min Angle    Max Angle
1     1                  88.99               -5.1975      -177.1138
2     1                  0.001               37.0735      -25.9647
3     1                  0.001               105.6874     2.7144
4     1                  0.001               -17.3440     51.4407

Table 8.3: Joint angles learned after analysis

DOF   Link Offset (AM)   Link Offset (EMM)
1     1                  0.004
2     1                  0.003
3     1                  0.001
4     1

Table 8.4: Link offsets learned after analysis


Figure 8.7: Analysis ToolBox DOF 1 Result

Figure 8.8: Analysis ToolBox DOF 2 Result


Figure 8.9: Analysis ToolBox DOF 3 Result

Figure 8.10: Analysis Tool Box DOF 4 Result


Figure 8.11: Validation of the model learned, Trial 1; (a) marker at random configuration 1, (b) manipulator reaching configuration 1


Figure 8.12: Validation of the model learned, Trial 2; (a) marker at random configuration 2, (b) manipulator reaching configuration 2


Chapter 9
Conclusion and Future Work
9.1 Conclusion

The thesis identified the need to address a problem that lies in a grey area of robotics research. Though much literature exists on making a robot model its environment, most of it does not address the problem of self modelling. The main objective of the thesis was to make a robot capable of self modelling so as to have good eye-hand coordination for carrying out various tasks. The literature survey was done with two objectives in mind: one for the concept of self modelling and the other for the concept of eye-hand coordination. Various existing and proposed methods for both were surveyed and discussed.
The proposed method for solving the problem was to observe the end effector position of the robotic manipulator using a stereo camera and to use various tools of statistical data analysis to infer information from these observations. The data was analysed through various non-linear regression methods so as to derive information about each joint from the trace of the end effector of the manipulator. The information derived includes the link lengths of the manipulator, the minimum and maximum angles of each joint and the various DH parameters required to represent the manipulator kinematically. The main underlying principles were discussed in detail.
The proposed solution was then tested experimentally on a robotic manipulator of Invenscience LC make using a stereo camera; the details of the experiment can be read in Chapter 7. The kinematic parameters obtained from the experiment were validated through a simple experiment. The validation uses only the inverse position of the manipulator and not the inverse orientation; the inverse orientation can be taken up as future work.
The capability of self modelling will enable robots to be more robust and resilient. Such a capability would be particularly valuable for manipulators sent to remote planets, where human-assisted re-calibration is either not possible or expensive. The results chapter shows that this approach can be used for black box model learning in robotic manipulators: the model obtained after analysis matches the data sheet values and falls within an acceptable tolerance.


The simple target reaching experiments done using the obtained model produced appreciable results. If the robotic system, while under operation, suffers a minor change which alters its model, this approach enables the system to re-adjust the model by itself and still offer reasonably good operational performance.

9.2 Future Work and Scope

- The obtained model of the manipulator is approximate and can be refined using additional sensory measurements together with a higher number of trials.
- By using additional encoders, a more accurate reading of the joint actuators can be obtained.
- The manipulator used in the experiment had only revolute joints, so the approach needs to be tested on manipulators with other kinds of joints such as prismatic, cylindrical or spherical joints.
- The model obtained needs to be tested with other manipulators to generalize the approach and to further validate the proposed solution. This includes testing the method on manipulators of different makes and with different types of joints.
- Modelling of a spherical joint using this method needs to be studied and implemented. Though a spherical joint is a combination of three orthogonal revolute joints, the method needs to be tested to prove its validity.
- The model obtained can be tested on more complex tasks like pick and place and complex trajectory generation. This would demonstrate the robustness and resilience of the model, and generating complex trajectories with the obtained model would show that the method could be used for industrial automation.
- At present the modelling is obtained by keeping a stereo camera at a distance; in future the camera could be mounted on the same platform as the manipulator, for example in a configuration like a humanoid robot where the visual system sits beside the manipulator. A more generalized approach could enable a robot equipped with a stereo camera to look at its own manipulator and obtain its model.
- The manipulator model learned here used the DH convention for finding the transformation matrices between the frames attached to each joint. Instead of the DH convention, the Product of Exponentials (POE) method [52] should also be examined for deriving the model. The Product of Exponentials formulation uses only two frames, the global frame and the end effector frame, and the transformation matrix is derived on the basis of the Chasles-Mozzi theorem [13].


Appendix A
Robotic Manipulators
Robotic manipulators are open kinematic chains composed of links and joints. However, a robot manipulator as a system consists of several other components like the end-effector, actuators, sensors, controllers, processors and software [35]. Robot manipulators are of three types: (i) serial robot manipulators, (ii) parallel robot manipulators and (iii) hybrid manipulators, which are also called serial-parallel manipulators. Serial robot manipulators are distinguished by serially connected links and joints between the base and the end-effector; figure A.1 is an example of this.

Figure A.1: An example of a Serial Manipulator [19]


Parallel robot manipulators are usually distinguished by a base link and a moving link connected to each other by a set of parallel, serially connected chains; figure A.2 is an example of this. A hybrid manipulator is a set of serially connected parallel links, as can be seen from figure A.3.
Another classification of manipulators is based on the number of DOF they possess. A rigid body in space has 6 DOF, that is, three position coordinates and three orientation coordinates. Hence a 6 DOF manipulator is sufficient to reach any point in its 3D workspace.

Figure A.2: An example of a Parallel Manipulator[33]

Figure A.3: An example of a Serial-Parallel Manipulator[33]


Such a manipulator is called a non-redundant manipulator, and a manipulator with an excess of DOF is called a redundant manipulator. The advantage of a redundant manipulator is that there exist multiple postures of the manipulator to reach a given point in its 3D workspace; this also increases the computational complexity. Usual industrial manipulators have 6 DOF [16].
Some important terminologies are defined below.
Links: These are the individual (assumed rigid) bodies that make up the manipulator. In a manipulator, a link is a member that may have relative motion with respect to the other members of the manipulator. Two or more links that have no relative motion between each other are considered a single link from a kinematic point of view.
Joints: These are the junctions by which two links are connected and which allow relative motion between the links. At a joint, the relative motion between the links can be described using a single coordinate. There are different kinds of joints based on the kind of relative motion they allow between the adjacent links; some worth mentioning are prismatic joints (1 DOF), revolute joints (1 DOF), universal joints (2 DOF), cylindrical joints (2 DOF) and spherical joints (3 DOF). Some of these can be seen in the figures of manipulators above. Of the various joints listed, the most basic ones are the revolute joint and the prismatic joint. The revolute joint allows rotation of one link with respect to the other connected link about an axis, called the axis of the revolute joint; the amount of rotation is called the joint variable of the revolute joint. The prismatic joint allows translation of one link with respect to the other connected link; the line along which the translation takes place is the axis of the prismatic joint, and the distance moved along this axis is the joint variable of the prismatic joint.
Actuators: These are the drivers connected to each joint axis that enable the change of the joint variable. They can be electric motors, or pneumatic or hydraulic cylinders. The actuators are controllable, which enables predictable motion of the manipulator; they give the joints the capability to produce relative motion between the links against gravity, inertia and other external forces. A joint to which an actuator is connected is called an active joint, and joints without actuators are called inactive or free joints.
Sensors: These are elements that detect and collect information about the internal and external state of the robot manipulator. Sensors measure the actual effect of a control signal sent to an actuator, which makes feedback control of the manipulator possible. The internal sensors used in robotic manipulators include position sensors such as encoders and linear potentiometers, and the external sensors include devices such as cameras.
Controller: The controller of a robotic manipulator is responsible for collecting information from the sensors and taking appropriate decisions so as to minimize the error between the set-point value and the actually attained position.

A.0.1 Important Definitions

With regard to a robotic manipulator, some important definitions [20] that the reader will encounter in this thesis are given below.
Cartesian Space: The space spanned by the Cartesian coordinate system, in which a point is represented by an ordered triple of signed distances along three mutually perpendicular axes called the coordinate axes. The signed distances are the perpendicular projections of the point onto each coordinate axis. The coordinate axes are usually denoted by the unit vectors $\hat{x}$, $\hat{y}$, $\hat{z}$.
Task Space: The task space of a robot is defined as the volume of the Cartesian space which can be addressed by the end-effector of the robotic arm to do useful work. It is also called the work space, work envelope or work cell.

Figure A.4: The Denavit Hartenberg Convention
Joint Space: Joint space is defined as the part of the coordinate space in which the joints of the robot can move freely. In other words, it is the span of each joint of the manipulator. The robot is usually controlled in joint space, while tasks are performed in the work space [57].

A.0.2 Denavit Hartenberg Notation

The kinematic information of each link $i$ of the robot is described by rigidly attaching a coordinate frame to each joint $i+1$ [35]. The standard method of Denavit-Hartenberg (DH) parameterization is used for this. Under the DH convention, four parameters characterize each coordinate frame: the link length $a_i$, which is the distance between $z_{i-1}$ and $z_i$; the link twist $\alpha_i$, which is the amount of rotation about $x_i$ required to make $z_{i-1}$ parallel to $z_i$; the joint offset $d_i$, which is the distance between $x_{i-1}$ and $x_i$ along $z_{i-1}$; and the joint angle $\theta_i$, the angle about $z_{i-1}$ required to make $x_{i-1}$ parallel to $x_i$. The convention used is shown in figure A.4.
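As an illustration only (this is not code taken from the thesis control software), a single DH transformation between consecutive frames can be assembled from these four parameters with the Eigen library listed in Appendix B. The function name, parameter order and the use of the standard (distal) DH convention are assumptions made here for the sketch.

#include <Eigen/Dense>
#include <cmath>

// Homogeneous transform from frame i-1 to frame i in the standard DH convention:
// rotate theta about z, translate d along z, translate a along x, rotate alpha about x.
Eigen::Matrix4d dhTransform(double theta, double d, double a, double alpha) {
    const double ct = std::cos(theta), st = std::sin(theta);
    const double ca = std::cos(alpha), sa = std::sin(alpha);
    Eigen::Matrix4d T;
    T << ct, -st * ca,  st * sa, a * ct,
         st,  ct * ca, -ct * sa, a * st,
          0,       sa,       ca,      d,
          0,        0,        0,      1;
    return T;
}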

A.0.3 3D Transformations and Forward Kinematics

3D transformation is a fascinating topic: it provides the mathematical foundation for transforming a rigid body in 3D space. Transformations applied to a body are defined on its set of points, and a rigid body transformation is assumed to preserve all the relative distances between the particles of the object. These geometric transformations can be defined as mappings of a coordinate system to itself.

Rigid body transformations are affine in nature, and affine transformations preserve points, lines and planes. Although parallel lines remain parallel under an affine transformation, angles need not be preserved. Affine transformations include translation, scaling, homothety, similarity transformation, reflection, rotation, shear mapping and their compositions. Some basic concepts of these transformations, limited to affine mappings, are discussed here.

Figure A.5: Geometric transformations [49]

A position vector p in 3D space is denoted by a column vector $p = [x_1, y_1, z_1]^T$. Collinear points along the vector can be reached by scaling p by a scalar, and the point is then represented using four parameters as $p = [x_1, y_1, z_1, s]^T$; this is called the homogeneous representation. A rigid body is usually represented by a single vector to its centroid, and transformations of the rigid body are represented mathematically by matrix multiplication. A general transformation matrix is shown in equation A.1.

\begin{bmatrix}
r_{11} & r_{12} & r_{13} & d_x \\
r_{21} & r_{22} & r_{23} & d_y \\
r_{31} & r_{32} & r_{33} & d_z \\
p_x & p_y & p_z & s
\end{bmatrix}    (A.1)
Components $r_{11}$ to $r_{33}$ denote the rotation components, $d_x$, $d_y$, $d_z$ denote the translation along the x, y and z axes, $p_x$, $p_y$, $p_z$ denote the projective components along the x, y and z axes, and $s$ is the scale factor. Transformations always require a reference frame, and the motion of a rigid body is described with respect to a coordinate frame. Two coordinate frames are used in the case of rigid body transformations: the global coordinate frame and the local (body) coordinate frame. Since a robot has multiple moving parts (links, in the case of manipulators), for ease of description a coordinate frame is attached to each of the links separately. This helps in describing the motion of a link with respect to the global frame as well as the relative motion between the links. For example, figure A.4 shows separate frames attached to adjacent links of a manipulator. The relative motion between the frames (links) is defined using transformation matrices from one frame to the other, and the motion of each link with respect to the global frame is described using the transformation matrix between the local link frame and the global frame. Consider a vector ${}^B p$ described in a local coordinate frame. Its transformation to the global coordinate frame is given by

{}^G p = {}^G R_B \, {}^B p + {}^G d_B

where ${}^G d_B$ is the position vector of the origin of the local frame expressed in the global coordinate system, and ${}^G R_B$ is the rotation matrix from the body coordinate frame to the global frame, which is a function of the angle between the local x axis and the global x axis, the angle between the local y axis and the global y axis, and the angle between the local z axis and the global z axis.

Figure A.6: Transformation from local coordinate frame to global coordinate frame
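A minimal sketch of this local-to-global transformation, written with the Eigen library used elsewhere in this work (the variable names and numeric values are illustrative, not taken from the thesis code):

#include <Eigen/Dense>

int main() {
    // Rotation of the body frame w.r.t. the global frame (here: 90 degrees about z)
    const double angle = 1.5707963267948966;
    Eigen::Matrix3d R_GB = Eigen::AngleAxisd(angle, Eigen::Vector3d::UnitZ()).toRotationMatrix();
    // Origin of the body frame expressed in the global frame
    Eigen::Vector3d d_GB(0.5, 0.0, 0.2);

    // A point expressed in the body frame
    Eigen::Vector3d p_B(1.0, 0.0, 0.0);

    // Gp = G R_B * Bp + G d_B
    Eigen::Vector3d p_G = R_GB * p_B + d_GB;

    // Equivalent 4x4 homogeneous form of the same transformation
    Eigen::Matrix4d T_GB = Eigen::Matrix4d::Identity();
    T_GB.topLeftCorner<3, 3>() = R_GB;
    T_GB.topRightCorner<3, 1>() = d_GB;
    Eigen::Vector4d p_G_h = T_GB * p_B.homogeneous();

    return 0;
}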
Kinematics is defined as the science of the geometry of motion [35] and represents motion in terms of position, orientation and their time derivatives. In robotics, the kinematic description of the system is used to set up the fundamental equations for dynamics and control. This thesis concentrates on robotic manipulators and their kinematic description, which is derived using the principles of 3D transformations and the DH parameterization of coordinate frames. The manipulator considered is shown in figure A.7, and its DH table is given in Table A.1.
Matrix        α_{i-1}    a_{i-1}    θ_i    d_i
0T1(θ1)       0          0          θ1     0
1T2(θ2)       -π/2       0          θ2     d2
2T3(θ3)       0          a2         θ3     d3
3T4(θ4)       -π/2       a3         θ4     d4
4T5(θ5)       π/2        0          θ5     0
5T6(θ6)       -π/2       0          θ6     0

Table A.1: DH Parameters of the Puma 560 manipulator


The matrices ${}^{i-1}T_i$ listed in Table A.1 are the transformation matrices required to align each coordinate frame with the previous frame. The transformation matrices between the different coordinate frames can be written using the general transformation explained above.

Figure A.7: A Puma 560 manipulator with coordinate frames [41]

{}^0T_1 = \begin{bmatrix}
\cos\theta_1 & -\sin\theta_1 & 0 & 0 \\
\sin\theta_1 & \cos\theta_1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}    (A.2)

{}^1T_2 = \begin{bmatrix}
\cos\theta_2 & -\sin\theta_2 & 0 & 0 \\
0 & 0 & 1 & d_2 \\
-\sin\theta_2 & -\cos\theta_2 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}    (A.3)

{}^2T_3 = \begin{bmatrix}
\cos\theta_3 & -\sin\theta_3 & 0 & a_2 \\
\sin\theta_3 & \cos\theta_3 & 0 & 0 \\
0 & 0 & 1 & d_3 \\
0 & 0 & 0 & 1
\end{bmatrix}    (A.4)

{}^3T_4 = \begin{bmatrix}
\cos\theta_4 & -\sin\theta_4 & 0 & a_3 \\
0 & 0 & 1 & d_4 \\
-\sin\theta_4 & -\cos\theta_4 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}    (A.5)

{}^4T_5 = \begin{bmatrix}
\cos\theta_5 & -\sin\theta_5 & 0 & 0 \\
0 & 0 & -1 & 0 \\
\sin\theta_5 & \cos\theta_5 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}    (A.6)

{}^5T_6 = \begin{bmatrix}
\cos\theta_6 & -\sin\theta_6 & 0 & 0 \\
0 & 0 & 1 & 0 \\
-\sin\theta_6 & -\cos\theta_6 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}    (A.7)

A point $[x, y, z, 1]^T$ in the body frame of the last link (frame 6) appears in the global frame (frame 0) as in equation A.8.

{}^0p = [{}^0T_1][{}^1T_2][{}^2T_3][{}^3T_4][{}^4T_5][{}^5T_6]\,[x, y, z, 1]^T    (A.8)

To sum up, the forward kinematics of a robot manipulator is defined as obtaining the end-effector position from a set of specified values of the joint variables.
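Continuing the illustrative sketch from section A.0.2 (again, not an excerpt from the thesis control code, and the function and variable names are chosen here only for illustration), equation A.8 is simply an ordered product of the per-joint homogeneous transforms:

#include <Eigen/Dense>

// Equation A.8: a point [x, y, z, 1]^T in the last link frame mapped to the
// global frame by the ordered product of the per-joint transforms 0T1, 1T2, ...
Eigen::Vector4d toGlobalFrame(const Eigen::Matrix4d jointTransforms[], int n,
                              const Eigen::Vector4d& pointInLastFrame) {
    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    for (int i = 0; i < n; ++i) {
        T = T * jointTransforms[i];   // accumulate {}^0T_{i+1}
    }
    return T * pointInLastFrame;
}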

A.0.4 Inverse Kinematics

Determination of the joint variables in terms of the end-effector position and orientation is known as inverse kinematics. In other words, if

{}^0T_n = [{}^0T_1(q_1)]\,[{}^1T_2(q_2)] \cdots [{}^{n-1}T_n(q_n)]

represents the forward transformation of the end-effector pose in terms of the joint variables $q_1, q_2, \dots, q_n$, then inverse kinematics deals with finding the vector q given the end-effector pose. Usually the end-effector position is specified in world coordinates. Since the forward transformation matrix consists of trigonometric functions, multiple solutions may exist, and hence there are several methods to find the inverse kinematics solutions of a robotic manipulator, such as the decoupling technique, the inverse transformation technique and iterative techniques. The scope of this work only aims at an elementary understanding of inverse kinematics; in this regard only the decoupling technique is discussed.
The result of forward kinematics of the manipulator discussed in the previous section
can be written as in equation A.9.
{}^0T_6 = [{}^0T_1][{}^1T_2][{}^2T_3][{}^3T_4][{}^4T_5][{}^5T_6]
= \begin{bmatrix}
t_{11} & t_{12} & t_{13} & t_{14} \\
t_{21} & t_{22} & t_{23} & t_{24} \\
t_{31} & t_{32} & t_{33} & t_{34} \\
0 & 0 & 0 & 1
\end{bmatrix}    (A.9)

where the 12 elements $t_{ij}$ ($i = 1, \dots, 3$; $j = 1, \dots, 4$) are trigonometric functions of the 6 joint variables. The upper-left 3x3 block is the rotation matrix, and its orthogonality leaves only 3 of its components independent; hence only 6 of the 12 equations of the full forward transformation are independent. In the decoupling technique, the inverse kinematic problem is divided into an inverse position problem and an inverse orientation problem which are independent of each other. The overall transformation matrix of the robot can be decomposed as in equation A.10.
{}^0T_6 = [{}^0D_6][{}^0R_6]
= \begin{bmatrix} I & {}^0d_6 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} {}^0R_6 & 0 \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} {}^0R_6 & {}^0d_6 \\ 0 & 1 \end{bmatrix}    (A.10)

The translation matrix ${}^0D_6$ can be solved for the end-effector position, and the rotation matrix ${}^0R_6$ can be solved for the orientation variables. The transformations ${}^0T_1$ to ${}^2T_3$ are combined to obtain ${}^0T_3$, and the transformations ${}^3T_4$ to ${}^5T_6$ are combined to obtain ${}^3T_6$ (refer to equations A.11 and A.12).

{}^0T_3 = \begin{bmatrix}
\cos\theta_1\cos(\theta_2+\theta_3) & -\sin\theta_1 & \cos\theta_1\sin(\theta_2+\theta_3) & l_2\cos\theta_1\cos\theta_2 + d_2\sin\theta_1 \\
\sin\theta_1\cos(\theta_2+\theta_3) & \cos\theta_1 & \sin\theta_1\sin(\theta_2+\theta_3) & l_2\cos\theta_2\sin\theta_1 + d_2\cos\theta_1 \\
-\sin(\theta_2+\theta_3) & 0 & \cos(\theta_2+\theta_3) & l_2\sin\theta_2 \\
0 & 0 & 0 & 1
\end{bmatrix}    (A.11)

{}^3T_6 = \begin{bmatrix}
\cos\theta_4\cos\theta_5\cos\theta_6 - \sin\theta_4\sin\theta_6 & -\cos\theta_6\sin\theta_4 - \cos\theta_4\cos\theta_5\sin\theta_6 & \cos\theta_4\sin\theta_5 \\
\cos\theta_5\cos\theta_6\sin\theta_4 + \cos\theta_4\sin\theta_6 & \cos\theta_4\cos\theta_6 - \cos\theta_5\sin\theta_4\sin\theta_6 & \sin\theta_4\sin\theta_5 \\
-\cos\theta_6\sin\theta_5 & \sin\theta_5\sin\theta_6 & \cos\theta_5
\end{bmatrix}    (A.12)
The solution of the inverse kinematics problem starts with the wrist position vector d, which is $[t_{14}, t_{24}, t_{34}]^T$ of ${}^0T_6$ for $d_6 = 0$. The inverse position problem can thus be solved using equation A.13.

d = \begin{bmatrix}
\cos\theta_1\,(l_3\sin(\theta_2+\theta_3) + l_2\cos\theta_2) + d_2\sin\theta_1 \\
\sin\theta_1\,(l_3\sin(\theta_2+\theta_3) + l_2\cos\theta_2) - d_2\cos\theta_1 \\
-l_3\cos(\theta_2+\theta_3) + l_2\sin\theta_2
\end{bmatrix}
= \begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix}    (A.13)

where $d_x$, $d_y$ and $d_z$ are the coordinates of the 3D world point to be reached by the end-effector. By applying various algebraic identities, equation A.13 can be solved to obtain the joint parameters $\theta_1$, $\theta_2$ and $\theta_3$, which give the position of the end-effector in 3D space.
\theta_1 = 2\arctan\!\left(\frac{d_x - \sqrt{d_x^2 + d_y^2 - d_2^2}}{d_2 - d_y}\right), \qquad
\theta_2 = \arctan\!\left(\frac{c}{\sqrt{r^2 - c^2}}\right) - \arctan\!\left(\frac{a}{b}\right), \qquad
\theta_3 = \arcsin\!\left(\frac{d_x^2 + d_y^2 + d_z^2 - d_2^2 - l_2^2 - l_3^2}{2\,l_2\,l_3}\right)    (A.14)

where

a = 2\,l_2\sqrt{d_x^2 + d_y^2 - d_2^2}, \qquad
b = 2\,l_2\,d_z, \qquad
c = d_x^2 + d_y^2 + d_z^2 - d_2^2 + l_2^2 - l_3^2, \qquad
r^2 = a^2 + b^2
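A direct transcription of equation A.14, given purely as an illustrative sketch (the function and variable names are not from the thesis software, and branch selection and reachability checks are omitted):

#include <cmath>

struct PositionJointAngles { double theta1, theta2, theta3; };

// Inverse position kinematics following equation A.14; d2, l2 and l3 are the
// arm parameters and (dx, dy, dz) is the desired wrist position.
PositionJointAngles inversePosition(double dx, double dy, double dz,
                                    double d2, double l2, double l3) {
    const double rho = std::sqrt(dx * dx + dy * dy - d2 * d2);   // horizontal radius
    const double theta1 = 2.0 * std::atan((dx - rho) / (d2 - dy));

    const double a = 2.0 * l2 * rho;
    const double b = 2.0 * l2 * dz;
    const double c = dx * dx + dy * dy + dz * dz - d2 * d2 + l2 * l2 - l3 * l3;
    const double r2 = a * a + b * b;
    const double theta2 = std::atan2(c, std::sqrt(r2 - c * c)) - std::atan2(a, b);

    const double theta3 = std::asin((dx * dx + dy * dy + dz * dz
                                     - d2 * d2 - l2 * l2 - l3 * l3) / (2.0 * l2 * l3));
    return {theta1, theta2, theta3};
}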
Similarly, using the rotation matrix of ${}^3T_6$, the orientation parameters can be found. The rotation matrix of ${}^3T_6$ can be written as shown in equation A.15.

{}^3R_6 = \begin{bmatrix}
\cos\theta_4\cos\theta_5\cos\theta_6 - \sin\theta_4\sin\theta_6 & -\cos\theta_6\sin\theta_4 - \cos\theta_4\cos\theta_5\sin\theta_6 & \cos\theta_4\sin\theta_5 \\
\cos\theta_5\cos\theta_6\sin\theta_4 + \cos\theta_4\sin\theta_6 & \cos\theta_4\cos\theta_6 - \cos\theta_5\sin\theta_4\sin\theta_6 & \sin\theta_4\sin\theta_5 \\
-\cos\theta_6\sin\theta_5 & \sin\theta_5\sin\theta_6 & \cos\theta_5
\end{bmatrix}
= \begin{bmatrix}
o_{11} & o_{12} & o_{13} \\
o_{21} & o_{22} & o_{23} \\
o_{31} & o_{32} & o_{33}
\end{bmatrix}    (A.15)
The angles $\theta_4$, $\theta_5$ and $\theta_6$ can then be found using equation A.16.

\theta_4 = \arctan\!\left(\frac{o_{23}}{o_{13}}\right), \qquad
\theta_5 = \arctan\!\left(\frac{\sqrt{o_{13}^2 + o_{23}^2}}{o_{33}}\right), \qquad
\theta_6 = \arctan\!\left(\frac{o_{32}}{-o_{31}}\right)    (A.16)
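The orientation step can be transcribed in the same illustrative manner as the position step above (again only a sketch under the same assumptions, with o denoting the elements of the ${}^3R_6$ obtained from the desired pose):

#include <cmath>

struct WristJointAngles { double theta4, theta5, theta6; };

// Wrist angles from the elements o[i][j] of 3R6 (equation A.16), using atan2
// for quadrant-aware results.
WristJointAngles inverseOrientation(const double o[3][3]) {
    WristJointAngles w;
    w.theta4 = std::atan2(o[1][2], o[0][2]);                                   // o23 / o13
    w.theta5 = std::atan2(std::sqrt(o[0][2] * o[0][2] + o[1][2] * o[1][2]), o[2][2]);
    w.theta6 = std::atan2(o[2][1], -o[2][0]);                                  // o32 / (-o31)
    return w;
}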

Appendix B
Software Tools Used
A variety of software packages were used to create the control software of the robot manipulator and to carry out the analysis. This section discusses the main software tools and libraries that were used. The main control software was developed in the Microsoft Visual Studio 2010 development environment, and the main libraries used include Eigen, the Triclops SDK, the FlyCapture SDK, OpenGL and OpenCV. A part of the analysis was carried out using MATLAB 8.1a.

B.0.5 Platforms

Microsoft Visual Studio:

This is an integrated development environment (IDE) that supports C and C++ (including managed C++ and MFC), C#, VB.NET, F# and other languages. It provides an environment to develop Microsoft Windows software, web sites and other applications, including GUI design. Visual Studio also includes a code editor supporting IntelliSense as well as code refactoring, and the integrated debugger works both as a source-level and a machine-level debugger [53].
MATLAB:
This is a multi-paradigm numerical computing environment built around a fourth-generation programming language, developed by MathWorks. The software allows matrix manipulation, function plotting and user interface design, and it also interfaces with other programming languages such as C++, Java and Fortran. It provides many toolboxes which simplify programming and offer a fast way to check the correctness of one's logic, and the programming language is easy to follow and debug.


B.0.6 Libraries

Libraries are files created for a specific hardware or application that contain a collection of implementations of behaviour, written in a programming language, to ease development. These behaviours are invoked using specific syntax defined by the library, and libraries facilitate code reusability. Several libraries were used in this work: OpenCV for image processing, OpenGL for graphics, the Triclops and FlyCapture SDKs for stereo vision, and Eigen for linear algebra calculations.
OpenCV
OpenCV (open computer vision) is a library written in optimized C and C++ for fast image processing. The library contains a wide array of operations and ready-to-use algorithm implementations. It was first created by Intel and is now maintained by Willow Garage and the open-source community [12].
OpenGL
OpenGL (open graphics library) is a cross-platform API that eases graphics programming on Windows and UNIX environments. It is typically used to interact with the GPU to achieve hardware-accelerated rendering. It was first developed by Silicon Graphics and is presently maintained by the Khronos Group consortium [63].
Triclops SDK and FlyCapture SDK
These libraries are written for stereo vision processing with Point Grey imaging devices. The SDKs are required to communicate with Point Grey products and to collect image data. They provide a wide range of functions for stereo image manipulation; algorithms for camera calibration, image un-distortion, image alignment, image correspondence, triangulation etc. are built in.
Eigen
Eigen is a high-level C++ library of template headers for linear algebra, matrix and vector operations, numerical solvers and related algorithms. It has an elegant API and is known for its versatile fixed-size and dynamic-size matrix capabilities [34].
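For instance, a few lines of Eigen of the kind used throughout the kinematics code (a generic illustration, not an excerpt from the thesis software):

#include <Eigen/Dense>
#include <iostream>

int main() {
    // A fixed-size 3x3 rotation about z by 0.1 rad and a dynamic-size vector
    Eigen::Matrix3d R = Eigen::AngleAxisd(0.1, Eigen::Vector3d::UnitZ()).toRotationMatrix();
    Eigen::VectorXd v(3);
    v << 1.0, 2.0, 3.0;
    std::cout << "Rotated vector:\n" << R * v << std::endl;
    return 0;
}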


Bibliography
[1] Operator Manual, Advanced Robotic Manipulator 2.0 (ARM 2.0).
[2] PGR FlyCapture User Manual and API Reference.
[3] Maestro/MicroMaestro Servo Controller Manual.
[4] XBee/XBee-PRO RF Modules Product Manual.

[5] Erico Guizzo and Evan Ackerman. How Robonaut's Compliant Arms Work. http://spectrum.ieee.org/automaton/robotics/humanoids/how-robonaut-arms-work, 2012.
[6] Gossaye Mekonnen Alemu, Sanjeev Kumar, and Pushparaj M Pathak. Self-calibration
of a camera equipped scorbot er-4u robot.
[7] Peter K Allen, Aleksandar Timcenko, Billibon Yoshimi, and Paul Michelman. Automated tracking and grasping of a moving object with a robotic hand-eye system.
Robotics and Automation, IEEE Transactions on, 9(2):152–165, 1993.
[8] Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I Jordan. An
introduction to MCMC for machine learning. Machine Learning, 50(1-2):5–43, 2003.
[9] Ying Bai. The Windows serial port programming handbook. CRC Press, 2004.
[10] Josh Bongard, Victor Zykov, and Hod Lipson. Resilient machines through continuous
self-modeling. Science, 314(5802):1118–1121, 2006.
[11] Jean-Yves Bouguet. Camera Calibration Toolbox for Matlab. http://www.vision.
caltech.edu/bouguetj/calib_doc/htmls/parameters.html, 2013.
[12] Gary Bradski and Adrian Kaehler. Learning OpenCV: Computer vision with the
OpenCV library. OReilly Media, Inc., 2008.
[13] Roger W Brockett. Robotic manipulators and the product of exponentials formula. In
Mathematical Theory of Networks and Systems, pages 120–129. Springer, 1984.


[14] Sheffield Hallam University Centre for Sports Engineering Research. Depth Biomechanics. http://www.depthbiomechanics.co.uk/?p=102/, 2014.
[15] Nikolai Chernov. Circular and linear regression: Fitting circles and lines by least
squares. CRC Press/Taylor & Francis, 2011.
[16] John J Craig. Introduction to robotics, volume 7. Addison-Wesley Reading, MA,
1989.
[17] Anthony Dearden and Yiannis Demiris. Learning forward models for robots. In IJCAI,
volume 5, page 1440, 2005.
[18] Lu Dunmin, Gao Daoxiang, Tian Ye, and Xiao Aiping. Self-calibration for a welding
robot based on kinematics and matlab software. In Computer Science and Automation
Engineering (CSAE), 2011 IEEE International Conference on, volume 3, pages 691
694. IEEE, 2011.
[19] Yuefa Fang and Lung-Wen Tsai. Feasible motion solutions for serial manipulators at
singular configurations. Journal of Mechanical Design, 125(1):61–69, 2003.
[20] King Sun Fu, Ralph Gonzalez, and CS George Lee. Robotics: Control Sensing Vision
and Intelligence. Tata McGraw-Hill Education, 1987.
[21] David Edward Goldberg et al. Genetic algorithms in search, optimization, and machine learning, volume 412. Addison-wesley Reading Menlo Park, 1989.
[22] Rafael C Gonzalez, Richard Eugene Woods, and Steven L Eddins. Digital image
processing using MATLAB. Pearson Education India, 2004.
[23] MATLAB Users Guide. The mathworks. Inc., Natick, MA, 5, 1998.
[24] Gregory D Hager, Wen-Chung Chang, and A Stephen Morse. Robot hand-eye coordination based on stereo vision. Control Systems, IEEE, 15(1):30–39, 1995.
[25] Radim Halíř and Jan Flusser. Numerically stable direct least squares fitting of ellipses.
In Proc. 6th International Conference in Central Europe on Computer Graphics and
Visualization. WSCG, volume 98, pages 125–132. Citeseer, 1998.
[26] Justin W Hart and Brian Scassellati. Robotic self-models inspired by human development. In Metacognition for Robust Social Systems, 2010.
[27] Justin W Hart and Brian Scassellati. A robotic model of the ecological self. In
Humanoid Robots (Humanoids), 2011 11th IEEE-RAS International Conference on,
pages 682–688. IEEE, 2011.


[28] Inman Harvey, Philip Husbands, Dave Cliff, et al. Issues in evolutionary robotics.
School of Cognitive and Computing Sciences, University of Sussex, 1992.
[29] Trevor Hastie, Robert Tibshirani, Jerome Friedman, T Hastie, J Friedman, and R Tibshirani. The elements of statistical learning, volume 2. Springer, 2009.
[30] Janne Heikkila and Olli Silven. A four-step camera calibration procedure with implicit
image correction. In Computer Vision and Pattern Recognition, 1997. Proceedings.,
1997 IEEE Computer Society Conference on, pages 1106–1112. IEEE, 1997.
[31] Nicholas Hollinghurst and Roberto Cipolla. Uncalibrated stereo hand-eye coordination. Image and Vision Computing, 12(3):187–192, 1994.
[32] http://eyehandcoordination.com/.

[33] Jin Huang, Ye-Hwa Chen, and Zhihua Zhong. Udwadia-kalaba approach for parallel manipulator dynamics. Journal of Dynamic Systems, Measurement, and Control,
135(6):061003, 2013.
[34] Eigen Inc. Eigen Template Library. http://eigen.tuxfamily.org/index.php?
title=Main_Page/, 2014.
[35] Reza N Jazar. Theory of applied robotics. Springer, 2010.
[36] George J Klir and Bo Yuan. Fuzzy sets and fuzzy logic, volume 4. Prentice Hall New
Jersey, 1995.
[37] John R Koza. Genetic programming. 1992.
[38] Danica Kragic.
Robotic Visual Servoing and Eye-Hand Coordination.
http://www.robot.uji.es/EURON/visualservoing/summerschool/works/
Kragic/slides.pdf, 2012.
[39] Sukir S Kumaresan and Hua Harry Li. Hand-eye coordination of a robot manipulator based on fuzzy logic. In Image Processing, 1994. Proceedings. ICIP-94., IEEE
International Conference, volume 3, pages 221–225. IEEE, 1994.
[40] Jeffrey C Lagarias, James A Reeds, Margaret H Wright, and Paul E Wright. Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM
Journal on Optimization, 9(1):112–147, 1998.
[41] Steven M. LaValle. Planning Algorithms. http://planning.cs.uiuc.edu/node111.html, 2006.

[42] Jurgen Leitner, Simon Harding, Mikhail Frank, A Forster, and Jurgen Schmidhuber.
Transferring spatial perception between robots operating in a shared workspace. In
Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on,
pages 1507–1512. IEEE, 2012.
[43] Dunmin Lu, Chuanqing Zhang, Yuezhen Fan, Xiangdong Yang, and Ken Chen. The
kinematic self-calibration method and simulation of one palletizing robot. In Intelligent Control and Automation (WCICA), 2010 8th World Congress on, pages 6424
6427. IEEE, 2010.
[44] Jean-Arcady Meyer, Phil Husbands, and Inman Harvey. Evolutionary robotics: A
survey of applications and problems. In Evolutionary Robotics, pages 1–21. Springer,
1998.
[45] Shabbir Kurbanhusen Mustafa, Guilin Yang, Song Huat Yeo, and Wei Lin. Kinematic
calibration of a 7-dof self-calibrated modular cable-driven robotic arm. In Robotics
and Automation, 2008. ICRA 2008. IEEE International Conference on, pages 1288
1293. IEEE, 2008.
[46] Duy Nguyen-Tuong and Jan Peters. Model learning for robot control: a survey. Cognitive Processing, 12(4):319–340, 2011.
[47] Kevin Nickels and Kenneth Baker. Hand-eye calibration for robonaut. NASA Summer
Faculty Fellowship Program, 2003.
[48] Stefano Nolfi and Dario Floreano. Evolutionary robotics, volume 150. MIT press
Cambridge, 2000.
[49] Pablo Garcia Morato. Image Tracking in Augmented Reality: The translation between the 2D and the 3D world. http://www.arlab.com/blog/wp-content/uploads/2012/07/geometric-transformations.bmp, July 8, 2012.
[50] pahm. Infant Play Dough Play. http://www.playathomemomllc.com/2012/06/
infant-play-dough-play/, 2012.
[51] Nikhil R Pal and Sankar K Pal. A review on image segmentation techniques. Pattern
Recognition, 26(9):1277–1294, 1993.
[52] Frank C. Park and Kevin M. Lynch. Introduction to Robot Mechanics, Planning and Control.
SNU, 2011.
[53] Nick Randolph, David Gardner, Chris Anderson, and Michael Minutillo. Professional
Visual Studio 2010. John Wiley & Sons, 2010.
[54] Eshed Robotec. Scorbot er-4pc: Users manual. Rosh Haayin, Israel, 1982.

[55] Stephane Rochat, Estelle Martin, Chantal Piot-Ziegler, Bijan Najafi, Kamiar Aminian,
and Christophe J Bula. Falls self-efficacy and gait performance after gait and balance
training in older people. Journal of the American Geriatrics Society, 56(6):1154–1156, 2008.
[56] Eduardo Sanchez and Marco Tomassini. Towards Evolvable hardware: the evolutionary engineering approach, volume 1062. Springer, 1996.
[57] Rachit Sapra, Michael Mathew, and S Majumder. A solution to inverse kinematics
problem using the concept of sampling importance resampling. In Advanced Computing & Communication Technologies (ACCT), 2014 Fourth International Conference
on, pages 471–477. IEEE, 2014.
[58] Stefan Schaal. Is imitation learning the route to humanoid robots? Trends in cognitive
Sciences, 3(6):233–242, 1999.
[59] Hans-Paul Paul Schwefel. Evolution and optimum seeking: the sixth generation. John
Wiley & Sons, Inc., 1993.
[60] Orlin Sorensen.
Easy Hand-Eye Coordination Exercises to Improve Eyesight.
http://cdn4.rebuildyourvision.com/wp-content/uploads/2013/05/
Easy-Hand-Eye-Coordination-Exercises-to-Improve-Eyesight-300x208.
jpg.
[61] Zygmunt L Szpak, Wojciech Chojnacki, and Anton van den Hengel. Guaranteed
ellipse fitting with the sampson distance. In Computer VisionECCV 2012, pages
87–100. Springer, 2012.
[62] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics. MIT
press, 2005.
[63] Mason Woo, Jackie Neider, Tom Davis, and Dave Shreiner. OpenGL programming
guide: the official guide to learning OpenGL, version 1.2. Addison-Wesley Longman
Publishing Co., Inc., 1999.
[64] Ming Xie. A developmental principle for robotic hand-eye coordination skill. In
Development and Learning, 2002. Proceedings. The 2nd International Conference
on, pages 108–113. IEEE, 2002.
[65] Zhu Yanhe, Yan Jihong, Zhao Jie, and Cai Hegao. Autonomous kinematic selfcalibration of a novel haptic device. In Intelligent Robots and Systems, 2006 IEEE/RSJ
International Conference on, pages 4654–4659. IEEE, 2006.


List of Publications

Sapra Rachit, Michael Mathew, and S. Majumder. A Solution to Inverse Kinematics Problem Using the Concept of Sampling Importance Resampling. Advanced Computing & Communication Technologies (ACCT), 2014 Fourth International Conference on. IEEE, 2014.

Michael Mathew, Sapra Rachit, and S. Majumder. A Learning Based Approach to Self Modeling Robots. International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT 2014). IEEE, 2014.

Sapra Rachit, Michael Mathew, and S. Majumder. Efficient Computation of Configuration Space Transforms for Collision-free Motion Planning. International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT 2014). IEEE, 2014.

Michael Mathew, Sapra Rachit, and S. Majumder. A Self Modeling Approach in Robot Manipulators. First International Conference on Networks & Soft Computing, 2014. IEEE, 2014.
