
THE BALL FOLLOWER BOT

AN AUTONOMOUS IMAGE PROCESSING PROJECT

Winter Workshop 2009


Technology Robotics Society
IIT Kharagpur
Problem Statement:
Build an autonomous bot that can track a ball's
motion and follow it.

Group Heads:
Pinaki Ghosh
Rahul Das
Subhagato Dutta

Group Members:
Bharath Mahadevan
Deepit Purkayastha
Hindole Dutta
Nitin Nagwani
Nilanjana Bhattacharya
Aiyush Suhasaria
Anirban Majumdar
Ayan Chattopadhyay

Software Used:
1. Microsoft Visual Studio 2008:
Microsoft Visual Studio is an Integrated Development Environment (IDE) from
Microsoft. It can be used to develop console and graphical user interface
applications along with Windows Forms applications, web sites, web applications,
and web services, in both native and managed code, for all platforms supported
by Microsoft Windows, Windows Mobile, Windows CE, the .NET Framework, the
.NET Compact Framework and Microsoft Silverlight.

Visual Studio includes a code editor supporting IntelliSense as well as code
refactoring. The integrated debugger works both as a source-level debugger
and a machine-level debugger.

Visual Studio supports languages by means of language services, which allow
any programming language to be supported (to varying degrees) by the code
editor and debugger, provided a language-specific service has been authored.
Built-in languages include C/C++ (via Visual C++), VB.NET (via Visual Basic
.NET), and C# (via Visual C#). Support for other languages such as F#, M,
Python, and Ruby, among others, is available via language services which are
installed separately. It also supports XML/XSLT, HTML/XHTML, JavaScript
and CSS.

2. Microsoft Visual C++ 2005:

Microsoft Visual C++ (often abbreviated as MSVC) is a commercial
integrated development environment (IDE) product engineered by Microsoft for
the C, C++, and C++/CLI programming languages. It has tools for developing
and debugging C++ code, especially code written for the Microsoft Windows
API, the DirectX API, and the Microsoft .NET Framework.

3. OpenCV 1.1pre1a:
OpenCV is a computer vision library originally developed by Intel. It is free for
use under the open-source BSD license. The library is cross-platform and focuses
mainly on real-time image processing; if it finds Intel's Integrated
Performance Primitives on the system, it will use these commercially optimized
routines to accelerate itself.
One of OpenCV's goals is to provide a simple-to-use computer vision
infrastructure that helps people build fairly sophisticated vision applications
quickly.

4. Serial Communication:
The header file tserial.h was used to enable serial communication between the
laptop and the bot.

5. WinAVR:
WinAVR was used to write and build the program that runs on the bot's
microcontroller.
Hardware Used:
1. Rectifier Circuit:

The power supply is derived from the 220 V AC mains. A rectifier circuit is
used to generate a constant 12 V DC supply for the bot. The circuit is as
shown below.

As can be seen, the circuit constitutes:

• A Step Down Transformer


• Two Diodes
• One Capacitor

The circuit generates a fully rectified wave out of the input AC wave.
The capacitor is used to smooth out the generated waveform. The waveforms are
as follows:
Input waveform:

Output waveform:
IC 7812:

The bot requires a constant DC supply, yet the output from the rectifier circuit
still has ripples. Hence the IC 7812 voltage regulator is used, which holds the
output at exactly 12 V.

2. L293D – The Motor Driver Circuit

The L293D circuit is used to run the two motors that control the motion of the
bot. It is a current-amplifying circuit: a low-current control signal is
converted into a higher-current signal which can drive a motor.
The L293D has two H-bridge circuits built into it, and each H-bridge controls
one motor. Each H-bridge is configured as follows:

The direction of motion of the motor is determined by switching the switches
S1, S2, S3 and S4 on and off.
3. Development Board:

We used an ATMEGA16 microcontroller on the development board to control the
motion of the bot.
Some of the important features of the ATMEGA16 are:

• 16-Kbyte self-programming Flash program memory,
• 1-Kbyte SRAM,
• 512-byte EEPROM,
• 8-channel 10-bit A/D converter,
• JTAG interface for on-chip debugging,
• Up to 16 MIPS throughput at 16 MHz.

A MAX232 IC was used to perform the serial communication. The MAX232 is an
integrated circuit that converts signals from an RS-232 serial port to signals
suitable for use in TTL-compatible digital logic circuits. The MAX232 is a dual
driver/receiver and typically converts the RX, TX, CTS and RTS signals.

An external 16 MHz crystal is used to run the system at 16 MHz.

4. Motors:

Two DC motors of 45 rpm each have been used in the bot running on a constant
DC supply of 12V.

5. Camera: iBall Face2Face C8.0

The camera is mounted on the bot to continuously capture images of the ball
being followed.

6. USB 232 Converter:

It allows serial communication between the bot and the laptop.

ALGORITHM:
We use the laptop as the image processing device to process the images taken
by the camera mounted on our bot. Let's take a look at the step-wise
algorithm now:

1. The camera on the bot captures a continuous video stream of the ball as a
sequence of still frames.
2. Processing The Frame:
• At each frame, the colour image taken as input is converted into a grayscale
image.
Grayscaling:
Every frame is divided into pixels, and each pixel has a set of properties
which can be read through functions provided in OpenCV. The RGB values of each
pixel are read by the code, and using the cvCvtColor() function we convert the
colour image into a grayscale image.
• In the next step, the grayscale image is converted into a binary image using
the cvThreshold() function.
Setting the Threshold:
The threshold was set at 250, close to the maximum value of 255, so as to make
only the pixels with intensity values close to 255 prominent.
Making the Binary Image:
The cvThreshold() function is called with threshold value 250 and threshold
type CV_THRESH_BINARY. This creates a binary image of the input frame.
• Now, the unwanted noise in the frame is removed and the largest white
patch in the image is found. The centre of this patch is then computed.

3. Function to find the direction in which the bot should move:

• The distance between the centre of the screen and the centre of the largest
white patch is calculated in both the vertical and horizontal directions.
• If yscreen > ypatch, the bot moves in the backward direction.
• If yscreen < ypatch, the bot moves in the forward direction.
• If xscreen > xpatch, the bot moves in the right direction.
• If xscreen < xpatch, the bot moves in the left direction.

4. If the ball goes out of the field of view of the bot, the bot takes a
zero-radius turn until it finds the ball again.
5. The code implementing these instructions is burnt onto the ATMEGA16.

BUILDING THE BOT:

A Journey Into The Unknown
The domain of image processing was a completely new one for us all. Yet, within
a span of 7 days, we were able to put together an autonomous bot capable of
tracking and following a ball, and we managed to pack in a lot of fun and
learning in the process.

The bot basically was built in the following steps:

1. Building the Rectifier circuit:

We implemented a full-wave rectifier circuit. Two diodes were connected to the
two extreme ends of the centre-tapped transformer while the centre tap was
connected to ground. A 1000 uF capacitor was connected across the output to
generate an almost constant DC voltage of 12 V. In practice, the output
generated is higher than 12 V (observed to be about 17 V), so an IC 7812 is
used to give an exact 12 V supply to the L293D motor driver circuit.

2. Building the Motor Driver (L293D) circuit:

• The ENABLE1, ENABLE2 and VCC1 pins were set at 5 V and the VCC2 pin was set
at 12 V.
• Output pins 3 and 6 are connected to the left motor, and pins 11 and 14 are
connected to the right motor.
• Input pins 2, 7, 10 and 15 are connected to the ATMEGA16.

3. ATMEGA16:
• PB0, PB1, PB2 and PB3 are used as the output pins.
• PD4 and PD5 are used for PWM.
• The code was compiled with WinAVR and burnt onto the microcontroller.
4. The Bot:
• The bot is a differential drive, with autonomous feedback provided through
the PC camera.
• A castor wheel is mounted at the end opposite the camera.
5. The Code:
We spent the maximum time writing the code for the bot and thoroughly
enjoyed every moment of it. We began by learning a whole new library
called OpenCV.
We prepared separate modules for grayscaling, thresholding and binary
imaging, followed by code for tracking the ball with varying degrees of
efficiency. In the end, we came up with the algorithm stated above.
We ran into several errors, sometimes silly ones like forgetting to link the
additional dependencies needed for image processing, namely cv.lib, cvaux.lib,
highgui.lib and cxcore.lib.
Often there were fatal errors, like the heap running out of memory, which
slowed us down several times.
Logically too, we had misjudged the required threshold in the initial stages,
but finally we got to the best possible solution. We cracked our heads a great
deal over how to track the ball, and the solution was found in the form of the
largest continuous white patch in the field of view of the bot, whose centre
point is tracked continuously.
Interestingly, after a little alteration in the code, the bot can also act as a
line follower, albeit one that finds the shortest possible path across sharp
turns!

Thanks To:
• Our group heads for teaching us so patiently
• GOOGLE for its excellent search system...OpenCV would
have remained a mystery without it.
• Wikipedia for the resources