
Bachelor of Science in Computer Science and Engineering

Obstacle detection and size measurement for autonomous mobile robot using sensor fusion

by
Muhammad Wali Ullah Bhuiyan
and
Fayaz Shahdib Chowdhury
Systems and Software Lab (SSL)

Supervised by
Md. Kamrul Hasan, PhD, Assistant Professor, Dept. of CSE



Department of Computer Science and Engineering (CSE)
Islamic University of Technology (IUT)
October, 2012


Declaration of Authenticity
This is to certify that the work presented in this thesis is the outcome
of the analysis and investigation carried out by the candidate under
the supervision of Dr. Md. Kamrul Hasan in the Department of
Computer Science and Engineering (CSE), IUT, Gazipur. It is also
declared that neither this thesis nor any part of it has been
submitted anywhere else for any degree or diploma. Information
derived from the published and unpublished work of others has been
acknowledged in the text and a list of references has been given.

Signature of the Students:

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Muhammad Wali Ullah Bhuiyan
Student ID: 084408
Academic year: 2011-2012
Date:

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Fayaz Shahdib Chowdhury
Student ID: 084430
Academic year: 2011-2012
Date:

Signature of the Supervisor:

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
Md. Kamrul Hasan, PhD
Assistant Professor
Department of Computer Science and Engineering
Islamic University of Technology
Date:

Table of Contents
Acknowledgements
Abstract
1. Introduction
   1.1 Motivation
   1.2 Robot Navigation
   1.3 Sensor Fusion
2. Related Works
3. Proposal
4. Experiment and Results
   4.1 Experiment
      4.1.1 The Camera
      4.1.2 Arduino Mega 2560
      4.1.3 Ultrasonic Sensor
   4.2 Results
5. Conclusion
6. Future Prospects
References
Appendix A
Appendix B
Appendix C


Acknowledgements

All praise is for Allah (SWT). Without the grace and blessings of the
Almighty we wouldn't be where we are. We are grateful for the
strength, patience and skill He has given us, without which it would
have been difficult to complete this thesis successfully.

There are several individuals we would like to thank. Their support
and contributions were essential to successful completion of this
thesis.

We would like to thank our supervisor Dr. Md. Kamrul Hasan,
Assistant Professor, Department of CSE, IUT for guiding us through
the research, and inspiring us.

We would also like to thank Mr. Hasan Mahmud, Assistant Professor,
Department of CSE, IUT, for helping us in tight spots and giving us
valuable suggestions and advice.

Finally, we would like to thank IUT for providing an excellent
environment for research.






Abstract
Different types of sensors are often fused to acquire information
which cannot be acquired by a single sensor alone. Sensor fusion is
particularly applicable for mobile robots for object detection and
navigation. The techniques developed so far for detecting obstacles
are costly. Hence, a new technique is proposed which can detect an
obstacle, judge its distance and measure its size using one camera
and one ultrasonic sensor. The proposed technique is less costly than
previous techniques, both economically and computationally.










Chapter 1
Introduction
Fusion of different sensors, such as sound, vision and temperature
sensors, allows the extraction of information which cannot be acquired
by any single sensor. Different types of sensors work differently, and
each has its own strengths and weaknesses; no single sensor can provide
all the necessary information. Sensor fusion combines the strengths of
different sensors so that each compensates for the weaknesses of the
others. [1, 2]
For an autonomous mobile robot, sensor fusion is important for
perceiving its environment. Sensor fusion allows it to perceive its
surroundings much as human beings do. Human beings use their senses of
vision, sound, smell, touch and taste to understand their surroundings.
Information from one sensor alone is not enough to give an accurate
picture: a food may look good, but without the senses of smell and
taste it cannot be determined whether the food is still edible.
It is essential for autonomous robots to have an accurate perception
and understanding of their surroundings. Without knowing its
surroundings, a robot cannot navigate. An autonomous robot moves
unsupervised: it obtains information about its surrounding environment
using its sensors and decides its course of action according to its
programming, without any external help. If the information provided is
inaccurate or incomplete, it becomes hard for the robot to decide its
next action.

1.1 Motivation

Sensor fusion is an important part of today's everyday life, especially
as devices are becoming smarter. Smart devices rely on different types
of sensory data and must fuse them to obtain better information about
their objectives. This is where sensor fusion comes in: it effectively
combines information from various sensors to give a better picture of
what is going on.
Sensor fusion is particularly essential for navigation, especially
navigation for autonomous robots. Without sensor fusion it is
impossible for the robot to know where it is, where it needs to go,
where the path is, which path to take, or whether it has reached its
destination yet.
Robotics is a leading branch of engineering which demands knowledge of
hardware, sensors, actuators and programming. The result is a system
which can be made to do many different things. However, developing such
a system is expensive and difficult, so we have come up with a plan to
build an autonomous mobile robot which is less expensive. A robot has
three main parts: perceptors, processors and actuators. The perceptors
are the sensors which provide information about the surrounding
environment to the robot. The processor uses the information to decide
the next course of action and drives the actuators accordingly.
Gathering information about the
surroundings is a critical task for any robot. It is important that the
robot gets accurate and complete information from its sensors. This is
where sensor fusion is vital. Data from different sensors are fused to
give useful information to the robot. A good sensor fusion technique
can be used in different applications of robotics, such as industrial
robotics. This is why we are working on sensor fusion.

1.2 Robot Navigation

Robot navigation algorithms are classified as global or local,
depending on knowledge of the surrounding environment. In global
navigation, the environment surrounding the robot is known and a path
which avoids the obstacles is selected. In local navigation, the
environment surrounding the robot is unknown, and sensors are used to
detect the obstacles and avoid collisions. [3]

For global navigation, an INS (Inertial Navigation System) or an
odometric system can be used [4]. INS uses the velocity, orientation,
and direction of the robot to calculate its location relative to a
starting position. In a global environment, where the starting
position, the goal and the obstacles are known, an INS can lead a robot
to its goal. But a major problem of INS is that it suffers from
integration drift: small errors in the measurements accumulate into a
larger error in position. It is like asking a blindfolded man to
navigate from point X to point Y in a known environment. He knows the
way but he cannot see, so he has to guess his location and decide which
direction to move. With every guess, the errors he makes accumulate. By
the time he thinks he has reached Y, his actual position may be quite
far from Y.

In a local environment, the robot does not know anything about its
surroundings aside from its sensor readings. It has to rely on its
sensors for information about its location. Since a single sensor is
not capable of doing this task alone, sensor fusion becomes important.
Information from different sensors is obtained and fused to find the
location of the robot, detect obstacles and avoid them.

1.3 Sensor Fusion

There are many different types of sensors available, such as infrared
sensors, ultrasonic sonar sensors, LIDAR (Light Detection and
Ranging) [5, 6], RADAR (Radio Detection and Ranging) [7-9] and vision
sensors. For obstacle detection and avoidance, many of the above
sensors can be fused to generate a map of the local environment.
Obstacle detection can be classified into two types:


1. Range-based obstacle detection.

2. Appearance-based obstacle detection.

In range-based obstacle detection, sensors scan the area, detect any
obstacle within range, and also estimate the distance between the robot
and the obstacle.
In appearance-based obstacle detection, the physical appearance of the
obstacle is extracted from the environment, usually by image
processing. [10]















Chapter 2
Related Works
Detecting and avoiding obstacles using sensor fusion is central to the
Unmanned Ground Vehicle (UGV) program developed for the US military.
The goal of this program is to drive a High Mobility Multipurpose
Wheeled Vehicle (HMMWV) autonomously on the road: drive autonomously at
about 10 mph, detect obstacles on the road and avoid them, and collect
data along the way. Three types of sensors are used: a LADAR (Laser
Detection and Ranging) sensor, a Global Positioning System sensor and
an Inertial Navigation System sensor. Obstacle detection is done by the
LADAR sensor, while obstacle avoidance is handled algorithmically. The
autonomous vehicle built with these sensors is named the NIST HMMWV.
This project is very expensive and uses costly sensors to detect
obstacles. [11]

In another study, obstacle detection is done with a new infrared
sensor. This sensor is suitable for distance estimation and map
building. The amplitude response as a function of distance and angle of
incidence is easily formulated using a model that needs only one
parameter: the IR reflection coefficient of the target surface. Once an
object has been modelled and identified, its distance from the IR
sensor can be obtained in successive readings, within 2 ms (typical
response time). Distance measurements with this sensor can range from a
few centimetres to 1 m, with uncertainties ranging from 0.1 mm for near
objects to 10 cm for distant objects, and typically 1.2 cm for objects
placed at 50 cm. However, the reading from an IR sensor is not always
linear with distance, and it also varies with the surface of the
obstacle. [12]
Another study uses a stereo camera and radar to accurately estimate the
location, size, pose, and motion information of a threat vehicle with
respect to a host vehicle. The goal is to detect and avoid potential
collisions. To do that, the contour of the threat vehicle is first
fitted from stereo depth information and the closest point on the
contour to the vision sensor is found. Then, the fused closest point is
obtained by fusing radar observations with the vision closest point.
Next, by translating the fitted contour to the fused closest point, the
fused contour is obtained. Finally, the fused contour is tracked using
rigid body constraints to estimate the location, size, pose, and motion
of the threat vehicle. A stereo camera can give a three-dimensional
perspective and thus a distance measurement, but it is costly, both
computationally and economically, and not well suited to small
autonomous mobile robots. [13]





Chapter 3
Proposal

We propose a sensor fusion technique, less costly both economically and
computationally, that allows an autonomous robot to detect an obstacle,
find its distance and measure its size. Our system uses a camera and an
ultrasonic transceiver device to achieve this. We fuse the range data
collected by the ultrasonic sensor with the image captured by the
camera for object detection and object size measurement.

The human eye has a fixed angle of vision, i.e. the lateral area
covered by the eye is limited to a fixed angle. The same is true for a
camera: its field of view is also fixed, and everything the camera sees
is squeezed into the image. Although the image has a fixed resolution,
the size of an object in the image varies with its distance from the
camera as well as with the real-life size of the object itself. If the
same object is photographed at two different distances, it appears
larger in the image taken at the shorter distance. If two objects of
different sizes are at the same distance, the larger object appears
larger in the image. This geometric similarity is used to find the size
of an object.



A. Object Detection: Using Ultrasonic Sensor

Using an ultrasonic sensor we can easily detect the presence of an
object. Our robot follows the echolocation technique of bats to detect
obstacles in its path. The ultrasonic sensor continuously emits
ultrasonic waves. If a wave hits an obstacle in front of the robot, it
bounces back to the sensor. If the receiver picks up this reflected
wave, we can be sure that there is an obstacle in front of the robot.
The time between transmitting the ultrasonic wave and receiving the
reflected wave is measured, and since the speed of sound is known, the
distance to the obstacle can be calculated.

distance = speed x time (1)
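As a quick illustration of equation (1), the sketch below converts a measured echo time into an obstacle distance. This is only a sketch, not the thesis code: the echo time value is hypothetical, and the result is halved because the measured time covers the round trip from the sensor to the obstacle and back.

```matlab
% Illustrative only: obstacle distance from a round-trip echo time.
speedOfSound = 343;            % m/s in air at roughly 20 degrees C
echoTime     = 0.0029;         % s, hypothetical round-trip time of flight
distance     = speedOfSound * echoTime / 2;   % halve: out-and-back path
fprintf('Obstacle is about %.2f m away\n', distance);   % ~0.50 m
```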

B. Object Size Measurement

This is done in two parts. The first part consists of taking visual
information about the object: we use a camera to capture images and
apply various image processing techniques to extract the object from
the image. The second part consists of taking range information about
the object, for which we use the ultrasonic sensor.


For a fixed field of view, the horizontal and vertical extent the
camera can see is constant at a particular distance. If the angle of
vision is known, we can find this extent once we know the distance.

x = distance between the camera and the object
h = horizontal viewing length on a 2D plane perpendicular to x
θ = horizontal field of view

h = 2x tan(θ/2)    (2)





This distance h is squeezed into the image. If the image is m × n
pixels in size, m is the number of horizontal pixels and n is the
number of vertical pixels. The camera can see horizontally as far as h
in reality; in the image, h is represented by m pixels.

If an object is present at distance x, and the horizontal length of the
object is p in real life, it takes up q pixels in the image. Since
there is a geometric similarity between the image and the real-life
scene, the ratio q/m is equal to p/h. Since we know the values of q, m
and h, we can calculate the value of p.




For vertical measurements we use the same procedure with different
values; the angle of vision becomes the vertical field of view α, and v
is the corresponding vertical viewing length. The equation for the
vertical distance is:

v = 2x tan(α/2)    (3)

The similarity equation becomes:

s/n = t/v    (4)

where s is the vertical height of the object in the image in pixels and
t is the vertical length of the object in real life.
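To make the geometry concrete, below is a minimal MATLAB sketch of equations (2)-(4). The field-of-view angles, image resolution and pixel measurements are illustrative assumptions, not values from our experiment.

```matlab
% Object size from pixel size, distance and field of view (eqs. 2-4).
x     = 0.50;            % m, camera-to-object distance (from ultrasonic sensor)
theta = 60 * pi/180;     % horizontal field of view (assumed)
alpha = 45 * pi/180;     % vertical field of view (assumed)
m = 640;  n = 480;       % image width and height in pixels
q = 120;  s = 90;        % object width and height in the image, in pixels

h = 2 * x * tan(theta/2);   % real-world width seen by the camera, eq. (2)
v = 2 * x * tan(alpha/2);   % real-world height seen by the camera, eq. (3)
p = (q/m) * h;              % real object width,  from q/m = p/h
t = (s/n) * v;              % real object height, from s/n = t/v, eq. (4)
fprintf('Estimated object size: %.1f cm x %.1f cm\n', 100*p, 100*t);
```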










Chapter 4
Experiment and Results
4.1 The Experiment
For the experiment we have used several hardware and software
packages:
List of Hardware:
1. Camera (Logitech webcam)
2. Arduino Mega 2560
3. PING Ultrasonic Sensor
4. Desktop Computer

List of Software and Techniques:
1. MATLAB
2. Arduino IDE
3. Arduino package for MATLAB
4. PING ultrasonic sensor package for Arduino
5. Image processing algorithms
6. Conversion from RGB image to greyscale [14]
7. Thresholding [15]
8. Noise reduction [16]
a. Closing
b. Opening




4.1.1 The Camera

The camera can be a standard webcam of any resolution. However, it is
preferred that the resolution not be too high, since a larger image
requires more computational power.
In our experiment we used a Logitech webcam to take images and MATLAB
to process them. In MATLAB we take the image as input and convert the
colour image to greyscale for computational simplicity. A greyscale
image is one in which the value of each pixel carries only intensity
information; images of this sort are composed exclusively of shades of
grey, varying from black at the weakest intensity to white at the
strongest [14]. After conversion to greyscale we perform thresholding
on the image to separate the object from the background. Thresholding
is a simple method of segmentation and can be used to convert greyscale
images to binary images [15]. Following the thresholding, we perform
opening and closing on the image to eliminate noise.
Opening means performing erosion followed by dilation on the image,
while closing is dilation followed by erosion. Opening removes small
objects from the foreground of an image, placing them in the
background, while closing removes small holes in the foreground [16].
After isolating the object in the image we measure the dimensions of
the object in the image. The horizontal length of the object is found
from the equations:

q/m = p/h    (5)

p = (q/m) · 2x tan(θ/2)    (6)

The vertical height of the object is found from the equations:

s/n = t/v    (7)

t = (s/n) · 2x tan(α/2)    (8)
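A minimal MATLAB sketch of this processing chain is given below; the file name, structuring-element size and threshold choice are illustrative assumptions, and our actual code may differ in detail.

```matlab
% Illustrative pipeline: greyscale -> threshold -> closing -> opening,
% then measure the object's pixel dimensions for equations (5)-(8).
rgb  = imread('object.jpg');            % captured frame (hypothetical file name)
gray = rgb2gray(rgb);                   % keep intensity only
bw   = im2bw(gray, graythresh(gray));   % Otsu threshold to a binary image
se   = strel('disk', 5);                % structuring element (assumed size)
bw   = imclose(bw, se);                 % fill small holes in the foreground
bw   = imopen(bw, se);                  % remove small noise blobs
props = regionprops(bw, 'BoundingBox', 'Area');
[~, idx] = max([props.Area]);           % keep the largest blob as the object
box = props(idx).BoundingBox;           % [xmin ymin width height] in pixels
q = box(3);                             % horizontal pixel length, used in eq. (6)
s = box(4);                             % vertical pixel height, used in eq. (8)
```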













The original image:

Figure 3: The Original Image

After conversion to greyscale:

Figure 4: The Image after Grey Scale conversion

After thresholding:

Figure 5: Binary Image after thresholding

After closing:

Figure 6: After performing closing operation

After opening, the final image:

Figure 7: After performing opening operation

The white rectangle is our object in the image; from this image its
pixel dimensions can be calculated easily.

4.1.2 Arduino Mega 2560
Arduino is an open-source electronics prototyping platform based on
flexible, easy-to-use hardware and software. It's intended for artists,
designers, hobbyists, and anyone interested in creating interactive
objects or environments.
Arduino can sense the environment by receiving input from a variety
of sensors and can affect its surroundings by controlling lights,

motors, and other actuators. The microcontroller on the board is
programmed using the Arduino programming language (based on
Wiring) and the Arduino development environment (based on
Processing). Arduino projects can be stand-alone or they can
communicate with software running on a computer [18].
The Arduino Mega 2560 is a microcontroller board based on the
ATmega2560. It has 54 digital input/output pins (of which 14 can be
used as PWM outputs), 16 analog inputs, 4 UARTs (hardware serial
ports), a 16 MHz crystal oscillator, a USB connection, a power jack,
an ICSP header, and a reset button. It contains everything needed to
support the microcontroller; simply connect it to a computer with a
USB cable or power it with an AC-to-DC adapter or battery to get
started. The Mega is compatible with most shields designed for the
Arduino Duemilanove or Diecimila [19].
Figure 8: Arduino Mega 2560 Board


4.1.3 The Ultrasonic Sensor


Figure 9: TS601-01 Ultrasonic sensor

For detecting the range of the obstacle we use an ultrasonic
transceiver device. The device has a transmitter and a receiver. The
ultrasonic sensor detects objects by emitting a short ultrasonic burst
and then "listening" for the echo. Under the control of a host
microcontroller (trigger pulse), the sensor emits a short 40 kHz
(ultrasonic) burst. This burst travels through the air at about 1130
feet per second, hits an object and then bounces back to the sensor.
The ultrasonic sensor provides an output pulse to the host that
terminates when the echo is detected; hence the width of this pulse
corresponds to the distance to the target. [17]
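For example (illustrative figures only), if the output pulse is about 2.9 ms wide, the sound has travelled roughly 1130 ft/s × 0.0029 s ≈ 3.3 ft (about 1 m) out and back, which places the object about 0.5 m from the sensor.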


Figure 10: Workings of Ultrasonic Sensor














4.1.4 The Setup

Figure 11: The arrangement of hardware

The ultrasonic sensor is serially connected to the Arduino. The PC and
the ultrasonic sensor do not communicate directly but through the
Arduino, which acts as a platform for communication between the
ultrasonic sensor and the PC. The ultrasonic sensor sends the pulse
duration to the Arduino, which converts it into distance values and
sends them to the PC through USB. The webcam is directly connected to
the PC through a USB connection.

We use MATLAB to control the webcam and the ultrasonic sensor, gather
data and fuse the information. However, to gather data from the
ultrasonic sensor we first need to communicate with the Arduino from
MATLAB. In order to do this we install the Arduino package for MATLAB
and burn the code for MATLAB communication onto the Arduino. Since the
Arduino directly controls the ultrasonic sensor, the code to control
the PING sensor must also be burned onto the Arduino. We use a tweaked
Arduino code for PING to enable direct access from MATLAB.
Once we have compiled and uploaded the proper code, we can easily
access both the webcam and the ultrasonic sensor from MATLAB as
if they were directly connected to it. Fusing the distance values from
the ultrasonic sensor and image information from the webcam, we can
deduce the size and distance of the detected object.
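As an illustration of this data-gathering loop, the sketch below pairs each distance reading with a webcam frame and applies the same size calculation as before. It is only a sketch under stated assumptions: the COM port, camera adaptor, field-of-view angles and the plain serial protocol (one distance in centimetres per line) are assumed, whereas our setup uses the Arduino package for MATLAB rather than raw serial I/O.

```matlab
% Illustrative fusion loop (assumed port, adaptor and field of view).
vid = videoinput('winvideo', 1);            % webcam via Image Acquisition Toolbox
ard = serial('COM3', 'BaudRate', 9600);     % Arduino assumed to print distance (cm) per line
fopen(ard);
theta = 60 * pi/180;  alpha = 45 * pi/180;  % assumed camera field of view
se = strel('disk', 5);
for k = 1:10
    x   = fscanf(ard, '%f') / 100;          % obstacle distance in metres
    img = getsnapshot(vid);                 % frame captured at (nearly) the same time
    bw  = imopen(imclose(im2bw(rgb2gray(img)), se), se);   % segment the object
    st  = regionprops(bw, 'BoundingBox', 'Area');
    [~, i] = max([st.Area]);                % largest blob is taken as the object
    box = st(i).BoundingBox;
    [n, m] = size(bw);                      % image height (n) and width (m) in pixels
    p = (box(3)/m) * 2*x*tan(theta/2);      % real object width  (m), eqs. (5)-(6)
    t = (box(4)/n) * 2*x*tan(alpha/2);      % real object height (m), eqs. (7)-(8)
    fprintf('d = %.2f m, width = %.1f cm, height = %.1f cm\n', x, 100*p, 100*t);
end
fclose(ard);  delete(vid);
```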























4.2 Results
When we run the fused sensors using Matlab, we figured out clusters
of information about the object. Our system receives the information
about how far the object is, object width, object, and object height. In
each second the system receives six sets of reading about the object.
We ran the program for 50 seconds for each experiment. We used
objects of different dimensions for each experiment. We got 290
reading about the distance and size of the object. There were some
rough values, which were outliers and we ignored them for better
accuracy.

Result Table:

No.  Real Width (cm)  Real Height (cm)  Avg. Obtained Width (cm)  Avg. Obtained Height (cm)  Error in Width (%)  Error in Height (%)
1    10.5             7.2               9.92                      6.33                       5.48                11.98
2    6.8              3.2               6.13                      2.54                       9.85                20.6
3    5.7              5.1               5.26                      4.75                       7.72                6.86
4    18               12                16.80                     11.00                      6.67                8.33
5    16               14                15.21                     12.67                      4.93                9.50
6    13.5             6.4               12.92                     5.68                       4.29                11.25
7    24               16                21.84                     14.71                      9.0                 8.0
8    9.1              6.1               8.6                       5.9                        5.49                3.2

Table 1: Experiment results with different rectangular objects




Result Graphs:

Figure 12: Distance Vs Dimensions graph for a single experiment









Figure 13: Percentage error in dimensions vs distance for a single
experiment





Chapter 5
Conclusion

Advantages
We use a new technique for obstacle detection and recognition.
One of the prime advantages of our work is that it is cheaper than
previously developed object detection systems. Moreover, using this
system the robot is able to measure the size of the obstacles it
detects. The accuracy of the system is reasonably high, and the output
of the system is quite acceptable.

Problems We Faced
1. Lack of resources: It was very difficult for us to configure the
sensors on a common platform with the system. We tried various
circuits but in vain; the Arduino solved the problem for us.
2. Image processing: We used various image processing techniques, and
as a result the system is not very fast. Besides, the accuracy of
the system depends on the environment.
3. Error: The error in length and width is influenced by several
factors: error in calculating the distance is a key factor, and
distortion of the lens also plays a key role in determining the
size of the object. Some information about the object might also
have been lost when applying the different image processing
techniques. Regardless, the error percentage is small enough to be
acceptable.

Chapter 6
Future Prospects

The future prospects of the project include improving the accuracy of
the system.
We will use more efficient image processing techniques and algorithms
to reduce the computational complexity and to detect and measure the
size of an object more accurately. Different algorithms will allow us
to work in the colour image domain, enabling us to detect, identify and
track objects better.
We can introduce machine learning, so that the robot can learn by
itself and navigate around without colliding with obstacles; the robot
will learn to identify obstacles and objects.
Various other sensors and systems, such as an accelerometer, GPS or a
pattern recognition system, can be integrated with this system to make
it a more powerful robot.







References
1. D. L. Hall and J. Llinas, A challenge for the data fusion community I:
Research imperatives for improved processing, in Proc. 7th Natl. Symp.
on Sensor Fusion, Albuquerque, NM, Mar. 1994.
2. J. Llinas and D. L. Hall, A challenge for the data fusion community II:
Infrastructure imperatives, in Proc. 7th Natl. Symp. on Sensor Fusion,
Albuquerque, NM, Mar. 1994.
3. R. Abiyev, D. Ibrahim, B. Erin, Navigation of mobile robots in the
presence of obstacles, Near East University, Department of Computer
Engineering, Mersin 10, Turkey.
4. E. v. Hinderer (iMAR Navigation). "Design of an Unaided Aircraft
Attitude Reference System with Medium Accurate Gyroscopes for
Higher Performance Attitude Requirements". Inertial Sensors and
Systems - Symposium Gyro Technology, Karlsruhe / Germany (iMAR
Navigation / DGON) 2011.
5. T. Li et al., "Middle atmosphere temperature trend and solar cycle
revealed by long-term Rayleigh lidar observations", J. Geophys. Res.,
116, 2011.
6. Thomas D. Wilkerson, Geary K. Schwemmer, and Bruce M. Gentry.
LIDAR Profiling of Aerosols, Clouds, and Winds by Doppler and Non-
Doppler Methods, NASA International H2O Project (2002).
7. R. V. Jones (1998-08). Most Secret War. Wordsworth Editions Ltd.
8. Kaiser, Gerald, Chapter 10 in "A Friendly Guide to Wavelets",
Birkhauser, Boston, 1994.
9. Kouemou, Guy (Ed.): Radar Technology. InTech, (2010)
10. Iwan Ulrich and Illah Nourbakhsh, Appearance-Based Obstacle
Detection with Monocular Color Vision, Proceedings of the AAAI National
Conference on Artificial Intelligence, Austin, TX, July/August 2000.
11. Tsai-Hong Hong, Steven Legowik, and Marilyn Nashman, Obstacle
Detection and Mapping System, Intelligent Systems Division, National
Institute of Standards and Technology (NIST).
12. G. Benet, F. Blanes, J. E. Simó, P. Pérez, "Using infrared sensors
for distance measurement in mobile robots", Departamento de Informática
de Sistemas, Computadores y Automática, Universidad Politécnica de
Valencia, P.O. Box 22012, 46080 Valencia, Spain. Received 9 August
2001; received in revised form 27 March 2002. Communicated by F.C.A.
Groen.
13. Shunguang Wu, Stephen Decker, Peng Chang, Theodore Camus, and Jayan
Eledath, "Collision Sensing by Stereo Vision and Radar Sensor Fusion",
IEEE Transactions on Intelligent Transportation Systems, vol. 10,
no. 4, December 2009.
14. Stephen Johnson (2006). Stephen Johnson on Digital Photography.
O'Reilly. ISBN 059652370X.
15. Gonzalez, Rafael C. & Woods, Richard E. (2002). Thresholding. In
Digital Image Processing, pp. 595-611. Pearson Education.
16. http://en.wikipedia.org/wiki/Opening_morphology
17. Parallax, Inc., PING)))™ Ultrasonic Distance Sensor (#28015), v1.3,
6/13/2006.
18. http://www.arduino.cc/
19. http://arduino.cc/en/Main/ArduinoBoardMega2560
