OpenCV with Python By Example

Build real-world computer vision applications and develop cool demos using OpenCV for Python

Prateek Joshi

This book will walk you through all the building blocks
needed to build amazing computer vision applications with
ease. It starts off with applying geometric transformations
to images. It then discusses affine and projective
transformations and how to use them to apply cool geometric
effects to photos. It then covers techniques used for object
recognition, 3D reconstruction, stereo imaging, and other
computer vision applications.
This book also provides clear examples written in Python
for building OpenCV applications. It begins with simple
beginner-level tasks, such as the basic processing and
handling of images, image mapping, and detecting images.
It also covers popular OpenCV libraries with the help
of examples.
This book is a practical tutorial that covers various examples
at different levels, teaching you about the different functions
of OpenCV and their actual implementation.

What you will learn from this book

• Apply geometric transformations to images, perform image filtering, and convert an image into a cartoon-like image
• Detect and track various body parts such as faces, noses, eyes, ears, and mouths
• Stitch multiple images of a scene together to create a panoramic image
• Make an object disappear from an image
• Identify different shapes, segment an image, and track an object in a live video
• Recognize an object in an image and build a visual search engine
• Reconstruct a 3D map from images
• Build an augmented reality application


Who this book is written for

This book is intended for Python developers who are new to OpenCV and want to develop computer vision applications with OpenCV-Python. This book is also useful for software developers who want to deploy computer vision applications on the cloud. It would be helpful to have some familiarity with basic mathematical concepts such as vectors, matrices, and so on.


In this package, you will find:

• The author biography
• A preview chapter from the book, Chapter 1, 'Applying Geometric Transformations to Images'
• A synopsis of the book's content
• More information on OpenCV with Python By Example

About the Author


Prateek Joshi is a computer vision researcher with a primary focus on content-based
analysis. He is particularly interested in intelligent algorithms that can understand
images to produce scene descriptions in terms of constituent objects. He has a master's
degree from the University of Southern California, specializing in computer vision. He
was elected to become a member of the Honor Society for academic excellence and an
ambassador for the School of Engineering. Over the course of his career, he has worked
for companies such as Nvidia, Microsoft Research, Qualcomm, and a couple of early
stage start-ups in Silicon Valley.
His work in this field has resulted in multiple patents, tech demos, and research
papers at major IEEE conferences. He has won many hackathons using a wide
variety of technologies related to image recognition. He enjoys blogging about
topics such as artificial intelligence, abstract mathematics, and cryptography.
His blog has been visited by users in more than 200 countries, and he has been
featured as a guest author in prominent tech magazines.

Preface
Computer vision is found everywhere in modern technology. OpenCV for
Python enables us to run computer vision algorithms in real time. With the advent
of powerful machines, we are getting more processing power to work with. Using
this technology, we can seamlessly integrate our computer vision applications into
the cloud. Web developers can develop complex applications without having to
reinvent the wheel. This book is a practical tutorial that covers various examples
at different levels, teaching you about the different functions of OpenCV and their
actual implementations.

What this book covers


Chapter 1, Applying Geometric Transformations to Images, explains how to apply
geometric transformations to images. In this chapter, we will discuss affine and
projective transformations, and see how we can use them to apply cool geometric
effects to photos. The chapter will begin with the procedure to install OpenCV-Python on multiple platforms such as Mac OS X, Linux, and Windows. We will
also learn how to manipulate an image in various ways, such as resizing, changing
color spaces, and so on.
Chapter 2, Detecting Edges and Applying Image Filters, shows how to use fundamental
image-processing operators and how we can use them to build bigger projects.
We will discuss why we need edge detection and how it can be used in various
ways in computer vision applications. We will discuss image filtering
and how we can use it to apply various visual effects to photos.


Chapter 3, Cartoonizing an Image, shows how to cartoonize a given image using image
filters and other transformations. We will see how to use the webcam to capture a
live video stream. We will discuss how to build a real-time application, where we
extract information from each frame in the stream and display the result.
Chapter 4, Detecting and Tracking Different Body Parts, shows how to detect and track
faces in a live video stream. We will discuss the face detection pipeline and see how
we can use it to detect and track different body parts, such as eyes, ears, mouth, nose,
and so on.
Chapter 5, Extracting Features from an Image, is about detecting the salient points (called
keypoints) in an image. We will discuss why these salient points are important and
how we can use them to understand the image content. We will talk about the different
techniques that can be used to detect salient points and extract features from an image.
Chapter 6, Creating a Panoramic Image, shows how to create a panoramic image by
stitching multiple images of the same scene together.
Chapter 7, Seam Carving, shows how to do content-aware image resizing. We will
discuss how to detect "interesting" parts in an image and see how we can resize a
given image without deteriorating those interesting parts.
Chapter 8, Detecting Shapes and Segmenting an Image, shows how to perform image
segmentation. We will discuss how to partition a given image into its constituent
parts in the best possible way. You will also learn how to separate the foreground
from the background in an image.
Chapter 9, Object Tracking, shows you how to track different objects in a live video
stream. At the end of this chapter, you will be able to track any object in a live video
stream that is captured through the webcam.
Chapter 10, Object Recognition, shows how to build an object recognition system.
We will discuss how to use this knowledge to build a visual search engine.
Chapter 11, Stereo Vision and 3D Reconstruction, shows how to reconstruct the depth
map using stereo images. You will learn how to achieve a 3D reconstruction of a
scene from a set of images.
Chapter 12, Augmented Reality, shows how to build an augmented reality application.
By the end of this chapter, you will be able to build a fun augmented reality project
using the webcam.

Applying Geometric
Transformations to Images
In this chapter, we are going to learn how to apply cool geometric effects to images.
Before we get started, we need to install OpenCV-Python. We will discuss how to
install the necessary tools and packages as well.
By the end of this chapter, you will know:

• How to install OpenCV-Python
• How to read, display, and save images
• How to convert between multiple color spaces
• How to apply geometric transformations like translation, rotation, and scaling
• How to use affine and projective transformations to apply funny geometric effects on photos

Installing OpenCV-Python
Let's see how to install OpenCV with Python support on multiple platforms.

Windows
In order to get OpenCV-Python up and running, we need to perform the
following steps:
1. Install Python: Make sure you have Python 2.7.x installed on your machine.
If you don't have it, you can install it from https://www.python.org/downloads/windows/

2. Install NumPy: NumPy is a great package to do numerical computing in
Python. It is very powerful and has a wide variety of functions. OpenCV-Python
plays nicely with NumPy, and we will be using this package a lot during the
course of this book. You can install the latest version from
http://sourceforge.net/projects/numpy/files/NumPy/

We need to install all these packages in their default locations. Once we install
Python and NumPy, we need to ensure that they're working fine. Open up the
Python shell and type the following:
>>> import numpy

If the installation has gone well, this shouldn't throw any error. Once you confirm it,
you can go ahead and download the latest OpenCV version from http://opencv.org/downloads.html

Once you finish downloading it, double-click to install it. We need to make a couple
of changes, as follows:
1. Navigate to opencv/build/python/2.7/
2. You will see a file named cv2.pyd. Copy this file to C:/Python27/lib/site-packages

You're all set! Let's make sure that OpenCV is working. Open up the Python shell
and type the following:
>>> import cv2

If you don't see any errors, then you are good to go! You are now ready to use
OpenCV-Python.
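If you want to confirm exactly which build was picked up, you can also print the version string. This is a quick sketch, not part of the book's original steps; cv2.__version__ is the attribute exposed by the Python bindings:
>>> import cv2
>>> print cv2.__version__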

Mac OS X
To install OpenCV-Python, we will be using Homebrew. Homebrew is a great
package manager for Mac OS X and it will come in handy when you are installing
various libraries and utilities on OS X. If you don't have Homebrew, you can install
it by running the following command on your terminal:
$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Even though OS X comes with inbuilt Python, we need to install Python using
Homebrew to make our lives easier. This version is called brewed Python. Once you
install Homebrew, the next step is to install brewed Python. Open up the terminal
and type the following:
$ brew install python

This will automatically install pip as well. Pip is a package management tool to
install packages in Python and we will be using it to install other packages. Let's
make sure the brewed Python is working correctly. Go to your terminal and type
the following:
$ which python

You should see /usr/local/bin/python printed on the terminal. This means that
we are using the brewed Python and not the inbuilt system Python. Now that we
have installed brewed Python, we can go ahead and add the repository, homebrew/
science, which is where OpenCV is located. Open the terminal and run the
following command:
$ brew tap homebrew/science

Make sure the package NumPy is installed. If not, run the following in your terminal:
$ pip install numpy

Now, we are ready to install OpenCV. Go ahead and run the following command
from your terminal:
$ brew install opencv --with-tbb --with-opengl

OpenCV is now installed on your machine and you can find it at /usr/local/Cellar/opencv/2.4.9/. You can't use it just yet. We need to tell Python where to find our OpenCV packages. Let's go ahead and do that by symlinking the OpenCV files. Run the following commands from your terminal:
$ cd /Library/Python/2.7/site-packages/
$ ln -s /usr/local/Cellar/opencv/2.4.9/lib/python2.7/site-packages/cv.py cv.py
$ ln -s /usr/local/Cellar/opencv/2.4.9/lib/python2.7/site-packages/cv2.so cv2.so

You're all set! Let's see if it's installed properly. Open up the Python shell and type
the following:
>>> import cv2

If the installation went well, you will not see any error message. You are now ready
to use OpenCV in Python.


Linux (for Ubuntu)


Before we start, we need to install some dependencies. Let's install them using the
package manager as shown below:
$ sudo apt-get -y install libopencv-dev build-essential cmake \
    libdc1394-22 libdc1394-22-dev libjpeg-dev libpng12-dev \
    libtiff4-dev libjasper-dev libavcodec-dev libavformat-dev \
    libswscale-dev libxine-dev libgstreamer0.10-dev \
    libgstreamer-plugins-base0.10-dev libv4l-dev libtbb-dev libqt4-dev \
    libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev \
    libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils \
    python-scipy python-pip python-virtualenv

Now that you have installed the necessary packages, let's go ahead and build
OpenCV with Python support:
$ wget "https://github.com/Itseez/opencv/archive/2.4.9.tar.gz" -O ./opencv/opencv.tar.gz
$ cd opencv
$ tar xvzf opencv.tar.gz -C .
$ mkdir release
$ cd release
$ sudo apt-get -y install cmake
$ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D BUILD_PYTHON_SUPPORT=ON -D WITH_XINE=ON -D WITH_OPENGL=ON \
    -D WITH_TBB=ON -D WITH_EIGEN=ON -D BUILD_EXAMPLES=ON \
    -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON ../
$ make -j4
$ sudo make install

Let's make sure that it's installed correctly. Open up the Python shell and type the
following:
>>> import cv2

If you don't see any errors, you are good to go.


If you have any other Linux distribution, please refer to the OpenCV downloads
page (http://opencv.org/downloads.html) for installation details.


Reading, displaying, and saving images


Let's see how we can load an image in OpenCV-Python. Create a file named
first_program.py and open it in your favorite code editor. Create a folder
named images in the current folder and make sure that you have an image
named input.jpg in that folder.
Once you do that, add the following lines to that Python file:
import cv2
img = cv2.imread('./images/input.jpg')
cv2.imshow('Input image', img)
cv2.waitKey()

If you run the preceding program, you will see an image being displayed in a
new window.

What just happened?


Let's understand the previous piece of code, line by line. In the first line, we are
importing the OpenCV library. We need this for all the functions we will be using
in the code. In the second line, we are reading the image and storing it in a variable.
OpenCV uses NumPy data structures to store the images. You can learn more about
NumPy at http://www.numpy.org
So if you open up the Python shell and type the following, you will see the datatype
printed on the terminal:
>>> import cv2
>>> img = cv2.imread('./images/input.jpg')
>>> type(img)
<type 'numpy.ndarray'>

In the next line, we display the image in a new window. The first argument in cv2.imshow is the name of the window. The second argument is the image you want to display.
You must be wondering why we have the last line here. The function cv2.waitKey() is used in OpenCV for keyboard binding. It takes a number as an argument, and that number indicates the time in milliseconds. Basically, we use this function to wait for a specified duration, until we encounter a keyboard event. The program stops at this point, and waits for you to press any key to continue. If we don't pass any argument or if we pass 0 as the argument, this function will wait for a keyboard event indefinitely.
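As a small illustration (a sketch, not taken from the book's code), cv2.waitKey() also returns the code of the key that was pressed, so you can poll with a timeout and react to a specific key:

import cv2

img = cv2.imread('./images/input.jpg')
cv2.imshow('Input image', img)

# Poll every 25 milliseconds; quit when the Esc key (code 27) is pressed
while True:
    key = cv2.waitKey(25) & 0xFF
    if key == 27:
        break

cv2.destroyAllWindows()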

Loading and saving an image


OpenCV provides multiple ways of loading an image. Let's say we want to load a
color image in grayscale mode. We can do that using the following piece of code:
import cv2
gray_img = cv2.imread('images/input.jpg', cv2.IMREAD_GRAYSCALE)
cv2.imshow('Grayscale', gray_img)
cv2.waitKey()

Here, we are using the flag cv2.IMREAD_GRAYSCALE to load the image in grayscale
mode. You can see that from the image being displayed in the new window. Next is the input image:


Following is the corresponding grayscale image:

We can save this image into a file as well:


cv2.imwrite('images/output.jpg', gray_img)

This will save the grayscale image into an output file named output.jpg. Make
sure you get comfortable with reading, displaying, and saving images in OpenCV,
because we will be doing this quite a bit during the course of this book.
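One small habit worth building on top of this (a sketch, not from the book): cv2.imread returns None when a file cannot be read, and cv2.imwrite accepts optional encoding parameters, so you can guard the load and control the JPEG quality when saving:

import cv2

gray_img = cv2.imread('images/input.jpg', cv2.IMREAD_GRAYSCALE)
if gray_img is None:
    raise IOError('Could not read images/input.jpg')

# Save with an explicit JPEG quality (0-100, higher is better)
cv2.imwrite('images/output.jpg', gray_img, [cv2.IMWRITE_JPEG_QUALITY, 90])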

Image color spaces


In computer vision and image processing, color space refers to a specific way of
organizing colors. A color space is actually a combination of two things: a color
model and a mapping function. We want color models because they help us
represent pixel values as tuples. The mapping function maps
the color model to the set of all possible colors that can be represented.


There are many different color spaces that are useful. Some of the more popular color
spaces are RGB, YUV, HSV, Lab, and so on. Different color spaces provide different
advantages. We just need to pick the color space that's right for the given problem.
Let's take a couple of color spaces and see what information they provide:

RGB: It's probably the most popular color space. It stands for Red, Green,
and Blue. In this color space, each color is represented as a weighted
combination of red, green, and blue. So every pixel value is represented as
a tuple of three numbers corresponding to red, green, and blue. Each value
ranges between 0 and 255.

YUV: Even though RGB is good for many purposes, it tends to be very
limited for many real life applications. People started thinking about
different methods to separate the intensity information from the color
information. Hence, they came up with the YUV color space. Y refers to the
luminance or intensity, and U/V channels represent color information. This
works well in many applications because the human visual system perceives
intensity information very differently from color information.

HSV: As it turned out, even YUV was still not good enough for some of the
applications. So people started thinking about how humans perceive color
and they came up with the HSV color space. HSV stands for Hue, Saturation,
and Value. This is a cylindrical system where we separate three of the most
primary properties of colors and represent them using different channels.
This is closely related to how the human visual system understands color.
This gives us a lot of flexibility as to how we can handle images.

Converting between color spaces


Considering all the color spaces, there are around 190 conversion options available
in OpenCV. If you want to see a list of all available flags, go to the Python shell and
type the following:
>>> import cv2
>>> print [x for x in dir(cv2) if x.startswith('COLOR_')]

You will see a list of options available in OpenCV for converting from one color
space to another. We can pretty much convert any color space into any other color
space. Let's see how we can convert a color image into a grayscale image:
import cv2
img = cv2.imread('./images/input.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow('Grayscale image', gray_img)
cv2.waitKey()

What just happened?


We use the function cvtColor to convert between color spaces. The first argument
is the input image and the second argument specifies the color space conversion.
You can convert to YUV by using the following flag:
yuv_img = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)

The image will look something like the following one:

This may look like a deteriorated version of the original image, but it's not.
Let's separate out the three channels:
cv2.imshow('Y channel', yuv_img[:, :, 0])
cv2.imshow('U channel', yuv_img[:, :, 1])
cv2.imshow('V channel', yuv_img[:, :, 2])
cv2.waitKey()


Since yuv_img is a NumPy array, we can separate out the three channels by slicing
it. If you look at yuv_img.shape, you will see that it is a 3D array whose dimensions
are NUM_ROWS x NUM_COLUMNS x NUM_CHANNELS. So once you run the preceding
piece of code, you will see three different images. Following is the Y channel:

The Y channel is basically the grayscale image. Next is the U channel:


And lastly, the V channel:

As we can see here, the Y channel is the same as the grayscale image. It represents the
intensity values. The U and V channels represent the color information.
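As a quick sanity check (a sketch, not from the book), you can also split the channels with cv2.split and stitch them back together with cv2.merge; converting the merged result back to BGR recovers the original image:

import cv2

img = cv2.imread('images/input.jpg')
yuv_img = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)

# Split into three single-channel images and merge them back
y, u, v = cv2.split(yuv_img)
merged = cv2.merge([y, u, v])

# Convert back to BGR to recover the original image
recovered = cv2.cvtColor(merged, cv2.COLOR_YUV2BGR)
cv2.imshow('Recovered', recovered)
cv2.waitKey()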
Let's convert to HSV and see what happens:
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
cv2.imshow('HSV image', hsv_img)


Again, let's separate the channels:


cv2.imshow('H channel', hsv_img[:, :, 0])
cv2.imshow('S channel', hsv_img[:, :, 1])
cv2.imshow('V channel', hsv_img[:, :, 2])
cv2.waitKey()

If you run the preceding piece of code, you will see three different images. Take a
look at the H channel:

Next is the S channel:


Following is the V channel:

This should give you a basic idea of how to convert between color spaces.
You can play around with more color spaces to see what the images look like.
We will discuss the relevant color spaces as and when we encounter them
during subsequent chapters.
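For instance, to try the Lab color space right away (a small sketch; the exact flag name available on your build can be confirmed from the dir() listing shown earlier):

lab_img = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
cv2.imshow('Lab image', lab_img)
cv2.waitKey()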

Image translation
In this section, we will discuss shifting an image. Let's say we want to move
the image within our frame of reference. In computer vision terminology, this is
referred to as translation. Let's go ahead and see how we can do that:
import cv2
import numpy as np
img = cv2.imread('images/input.jpg')
num_rows, num_cols = img.shape[:2]
translation_matrix = np.float32([ [1,0,70], [0,1,110] ])
img_translation = cv2.warpAffine(img, translation_matrix, (num_cols,
num_rows))
cv2.imshow('Translation', img_translation)
cv2.waitKey()


If you run the preceding code, you will see something like the following:

What just happened?


To understand the preceding code, we need to understand how warping works.
Translation basically means that we are shifting the image by adding/subtracting the
X and Y coordinates. In order to do this, we need to create a transformation matrix of the following form:

    T = [ [1, 0, tx],
          [0, 1, ty] ]

Here, the tx and ty values are the X and Y translation values, that is, the image will
be moved by tx units towards the right, and by ty units downwards. So once we create
a matrix like this, we can use the function warpAffine to apply it to our image. The
third argument in warpAffine is the size of the resulting image, given as the number
of columns followed by the number of rows. Since this size is the same as that of the
original image, the resultant image is going to get cropped. The reason for this is that
we didn't have enough space in the output when we applied the translation matrix.
To avoid cropping, we can do something like this:
img_translation = cv2.warpAffine(img, translation_matrix,
(num_cols + 70, num_rows + 110))

If you replace the corresponding line in our program with the preceding line,
you will see the following image:

Let's say you want to place the image in the middle of a bigger image frame; we can
do that by carrying out the following:
import cv2
import numpy as np
img = cv2.imread('images/input.jpg')
num_rows, num_cols = img.shape[:2]
translation_matrix = np.float32([ [1,0,70], [0,1,110] ])
img_translation = cv2.warpAffine(img, translation_matrix, (num_cols +
70, num_rows + 110))
translation_matrix = np.float32([ [1,0,-30], [0,1,-50] ])
img_translation = cv2.warpAffine(img_translation, translation_matrix,
(num_cols + 70 + 30, num_rows + 110 + 50))
cv2.imshow('Translation', img_translation)
cv2.waitKey()


If you run the preceding code, you will see an image like the following:

Image rotation
In this section, we will see how to rotate a given image by a certain angle. We can do
it using the following piece of code:
import cv2
import numpy as np
img = cv2.imread('images/input.jpg')
num_rows, num_cols = img.shape[:2]
rotation_matrix = cv2.getRotationMatrix2D((num_cols/2, num_rows/2),
30, 1)
img_rotation = cv2.warpAffine(img, rotation_matrix, (num_cols, num_rows))
cv2.imshow('Rotation', img_rotation)
cv2.waitKey()

If you run the preceding code, you will see an image like this:

What just happened?


In order to understand this, let's see how we handle rotation mathematically.
Rotation is also a form of transformation, and we can achieve it by using the
following transformation matrix:

    R = [ [cos θ, -sin θ],
          [sin θ,  cos θ] ]

Here, θ is the angle of rotation in the counterclockwise direction. OpenCV
provides closer control over the creation of this matrix through the function,
getRotationMatrix2D. We can specify the point around which the image
would be rotated, the angle of rotation in degrees, and a scaling factor for the
image. Once we have the transformation matrix, we can use the warpAffine
function to apply this matrix to any image.


As we can see from the previous figure, the image content goes out of boundary
and gets cropped. In order to prevent this, we need to provide enough space in
the output image. Let's go ahead and do that using the translation functionality
we discussed earlier:
import cv2
import numpy as np
img = cv2.imread('images/input.jpg')
num_rows, num_cols = img.shape[:2]
translation_matrix = np.float32([ [1,0,int(0.5*num_cols)], [0,1,int(0.5*num_rows)] ])
img_translation = cv2.warpAffine(img, translation_matrix, (2*num_cols, 2*num_rows))
rotation_matrix = cv2.getRotationMatrix2D((num_cols, num_rows), 30, 1)
img_rotation = cv2.warpAffine(img_translation, rotation_matrix, (2*num_cols, 2*num_rows))
cv2.imshow('Rotation', img_rotation)
cv2.waitKey()

If we run the preceding code, we will see something like this:
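As a side note (a sketch that is not part of the book's code), another common way to avoid the cropping is to compute the size of the bounding box of the rotated image and fold the extra shift directly into the rotation matrix:

import cv2
import numpy as np

img = cv2.imread('images/input.jpg')
num_rows, num_cols = img.shape[:2]

# Rotation matrix about the image center
rotation_matrix = cv2.getRotationMatrix2D((num_cols/2, num_rows/2), 30, 1)

# Bounding box that fits the whole rotated image
cos_a = abs(rotation_matrix[0, 0])
sin_a = abs(rotation_matrix[0, 1])
new_cols = int(num_rows * sin_a + num_cols * cos_a)
new_rows = int(num_rows * cos_a + num_cols * sin_a)

# Shift the result so that it is centered in the new canvas
rotation_matrix[0, 2] += (new_cols - num_cols) / 2
rotation_matrix[1, 2] += (new_rows - num_rows) / 2

img_rotation = cv2.warpAffine(img, rotation_matrix, (new_cols, new_rows))
cv2.imshow('Rotation without cropping', img_rotation)
cv2.waitKey()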


Image scaling
In this section, we will discuss resizing an image. This is one of the most
common operations in computer vision. We can resize an image using a scaling
factor, or we can resize it to a particular size. Let's see how to do that:
img_scaled = cv2.resize(img,None,fx=1.2, fy=1.2, interpolation =
cv2.INTER_LINEAR)
cv2.imshow('Scaling - Linear Interpolation', img_scaled)
img_scaled = cv2.resize(img,None,fx=1.2, fy=1.2, interpolation =
cv2.INTER_CUBIC)
cv2.imshow('Scaling - Cubic Interpolation', img_scaled)
img_scaled = cv2.resize(img, (450, 400), interpolation = cv2.INTER_AREA)
cv2.imshow('Scaling - Skewed Size', img_scaled)
cv2.waitKey()

What just happened?


Whenever we resize an image, there are multiple ways to fill in the pixel values.
When we are enlarging an image, we need to fill up the pixel values in between pixel
locations. When we are shrinking an image, we need to take the best representative
value. When we are scaling by a non-integer value, we need to interpolate values
appropriately, so that the quality of the image is maintained. There are multiple
ways to do interpolation. If we are enlarging an image, it's preferable to use linear or
cubic interpolation. If we are shrinking an image, it's preferable to use the area-based
interpolation. Cubic interpolation is computationally more complex, and hence
slower than linear interpolation. But the quality of the resulting image will be higher.


OpenCV provides a function called resize to achieve image scaling. If you don't
specify a size (by using None), then it expects the X and Y scaling factors. In our
example, the image will be enlarged by a factor of 1.2. If we do the same enlargement
using cubic interpolation, we can see that the quality improves, as seen in the following
figure. The following screenshot shows what linear interpolation looks like:

Here is the corresponding cubic interpolation:


If we want to resize it to a particular size, we can use the format shown in the last
resize instance. We can basically skew the image and resize it to whatever size we
want. The output will look something like the following:
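If, on the other hand, you want to resize to a particular width without skewing the image, you can compute the matching height yourself and keep the aspect ratio intact (a small sketch, not from the book):

import cv2

img = cv2.imread('images/input.jpg')
num_rows, num_cols = img.shape[:2]

# Shrink to a fixed width while preserving the aspect ratio
target_width = 300
scale = float(target_width) / num_cols
target_size = (target_width, int(num_rows * scale))

# INTER_AREA is generally the better choice when shrinking
img_resized = cv2.resize(img, target_size, interpolation=cv2.INTER_AREA)
cv2.imshow('Aspect-preserving resize', img_resized)
cv2.waitKey()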

Affine transformations
In this section, we will discuss the various generalized geometric
transformations of 2D images. We have been using the function warpAffine
quite a bit over the last couple of sections; it's about time we understood
what's happening underneath.
Before talking about affine transformations, let's see what Euclidean transformations
are. Euclidean transformations are a type of geometric transformations that preserve
length and angle measure. As in, if we take a geometric shape and apply Euclidean
transformation to it, the shape will remain unchanged. It might look rotated, shifted,
and so on, but the basic structure will not change. So technically, lines will remain
lines, planes will remain planes, squares will remain squares, and circles will
remain circles.
Coming back to affine transformations, we can say that they are generalizations of
Euclidean transformations. Under the realm of affine transformations, lines will
remain lines but squares might become rectangles or parallelograms. Basically,
affine transformations don't preserve lengths and angles.


In order to build a general affine transformation matrix, we need to define the control
points. Once we have these control points, we need to decide where we want them
to be mapped. In this particular situation, all we need are three points in the source
image, and three points in the output image. Let's see how we can convert an image
into a parallelogram-like image:
import cv2
import numpy as np
img = cv2.imread('images/input.jpg')
rows, cols = img.shape[:2]
src_points = np.float32([[0,0], [cols-1,0], [0,rows-1]])
dst_points = np.float32([[0,0], [int(0.6*(cols-1)),0], [int(0.4*(cols-1)),rows-1]])
affine_matrix = cv2.getAffineTransform(src_points, dst_points)
img_output = cv2.warpAffine(img, affine_matrix, (cols,rows))
cv2.imshow('Input', img)
cv2.imshow('Output', img_output)
cv2.waitKey()

What just happened?


As we discussed earlier, we are defining control points. We just need three points to
get the affine transformation matrix. We want the three points in src_points to be
mapped to the corresponding points in dst_points. We are mapping the points as
shown in the following:


To get the transformation matrix, we have a function called getAffineTransform in OpenCV. Once we have the affine transformation matrix, we use the warpAffine function to apply this matrix to the input image.
Following is the input image:

If you run the preceding code, the output will look something like this:


We can also get the mirror image of the input image. We just need to change the
control points in the following way:
src_points = np.float32([[0,0], [cols-1,0], [0,rows-1]])
dst_points = np.float32([[cols-1,0], [0,0], [cols-1,rows-1]])

Here, the mapping looks something like this:

If you replace the corresponding lines in our affine transformation code with these
two lines, you will get the following result:
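Incidentally, for a plain left-right mirror you don't need to set up an affine transform at all; OpenCV's flip function does the same thing directly (a quick sketch):

img_mirror = cv2.flip(img, 1)   # flip code 1 flips around the vertical axis
cv2.imshow('Mirror', img_mirror)
cv2.waitKey()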


Projective transformations
Affine transformations are nice, but they impose certain restrictions. A projective
transformation, on the other hand, gives us more freedom. It is also referred to
as homography. In order to understand projective transformations, we need to
understand how projective geometry works. We basically describe what happens to
an image when the point of view is changed. For example, if you are standing right
in front of a sheet of paper with a square drawn on it, it will look like a square. Now,
if you start tilting that sheet of paper, the square will start looking more and more
like a trapezoid. Projective transformations allow us to capture this dynamic in a nice
mathematical way. These transformations preserve neither sizes nor angles, but they
do preserve incidence and cross-ratio.
You can read more about incidence and cross-ratio at http://
en.wikipedia.org/wiki/Incidence_(geometry) and
http://en.wikipedia.org/wiki/Cross-ratio.

Now that we know what projective transformations are, let's see if we can extract
more information here. We can say that any two images on a given plane are related
by a homography. As long as they are in the same plane, we can transform anything
into anything else. This has many practical applications such as augmented reality,
image rectification, image registration, or the computation of camera motion between
two images. Once the camera rotation and translation have been extracted from an
estimated homography matrix, this information may be used for navigation, or to
insert models of 3D objects into an image or video. This way, they are rendered with
the correct perspective and it will look like they were part of the original scene.
Let's go ahead and see how to do this:
import cv2
import numpy as np
img = cv2.imread('images/input.jpg')
rows, cols = img.shape[:2]
src_points = np.float32([[0,0], [cols-1,0], [0,rows-1], [cols-1,rows-1]])
dst_points = np.float32([[0,0], [cols-1,0], [int(0.33*cols),rows-1], [int(0.66*cols),rows-1]])
projective_matrix = cv2.getPerspectiveTransform(src_points, dst_points)
img_output = cv2.warpPerspective(img, projective_matrix, (cols,rows))
cv2.imshow('Input', img)
cv2.imshow('Output', img_output)
cv2.waitKey()

If you run the preceding code, you will see a funny looking output like the
following screenshot:

What just happened?


We can choose four control points in the source image and map them to the destination
image. Parallel lines will not remain parallel lines after the transformation. We use a
function called getPerspectiveTransform to get the transformation matrix.
Let's apply a couple of fun effects using projective transformation and see what they
look like. All we need to do is change the control points to get different effects.
Here's an example:


The control points are as shown next:


src_points = np.float32([[0,0], [0,rows-1], [cols/2,0],
[cols/2,rows-1]])
dst_points = np.float32([[0,100], [0,rows-101], [cols/2,0],
[cols/2,rows-1]])

As an exercise, you should map the above points on a plane and see how the points
are mapped (just like we did earlier, while discussing Affine Transformations). You
will get a good understanding of the mapping system, and you can create your
own control points.
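To see the image rectification use case mentioned earlier, you can also go the other way: pick the four corners of a tilted region and map them back to a proper rectangle. The corner coordinates below are made up purely for illustration (a sketch, not from the book):

import cv2
import numpy as np

img = cv2.imread('images/input.jpg')
rows, cols = img.shape[:2]

# Hypothetical corners of a tilted quadrilateral in the source image
# (top-left, top-right, bottom-left, bottom-right)
src_points = np.float32([[50,50], [cols-80,20], [30,rows-60], [cols-40,rows-30]])

# Map them onto a full upright rectangle
dst_points = np.float32([[0,0], [cols-1,0], [0,rows-1], [cols-1,rows-1]])

projective_matrix = cv2.getPerspectiveTransform(src_points, dst_points)
img_output = cv2.warpPerspective(img, projective_matrix, (cols,rows))
cv2.imshow('Rectified', img_output)
cv2.waitKey()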

Image warping
Let's have some more fun with the images and see what else we can achieve.
Projective transformations are pretty flexible, but they still impose some restrictions
on how we can transform the points. What if we want to do something completely
random? We need more control, right? As it so happens, we can do that as well. We
just need to create our own mapping, and it's not that difficult. Following are a few
effects you can achieve with image warping:


Here is the code to create these effects:


import cv2
import numpy as np
import math
img = cv2.imread('images/input.jpg', cv2.IMREAD_GRAYSCALE)
rows, cols = img.shape
#####################
# Vertical wave
img_output = np.zeros(img.shape, dtype=img.dtype)
for i in range(rows):
    for j in range(cols):
        offset_x = int(25.0 * math.sin(2 * 3.14 * i / 180))
        offset_y = 0
        if j+offset_x < cols:
            img_output[i,j] = img[i,(j+offset_x)%cols]
        else:
            img_output[i,j] = 0
cv2.imshow('Input', img)
cv2.imshow('Vertical wave', img_output)
#####################
# Horizontal wave
img_output = np.zeros(img.shape, dtype=img.dtype)
for i in range(rows):
    for j in range(cols):
        offset_x = 0
        offset_y = int(16.0 * math.sin(2 * 3.14 * j / 150))
        if i+offset_y < rows:
            img_output[i,j] = img[(i+offset_y)%rows,j]
        else:
            img_output[i,j] = 0
cv2.imshow('Horizontal wave', img_output)
#####################

# Both horizontal and vertical
img_output = np.zeros(img.shape, dtype=img.dtype)
for i in range(rows):
    for j in range(cols):
        offset_x = int(20.0 * math.sin(2 * 3.14 * i / 150))
        offset_y = int(20.0 * math.cos(2 * 3.14 * j / 150))
        if i+offset_y < rows and j+offset_x < cols:
            img_output[i,j] = img[(i+offset_y)%rows,(j+offset_x)%cols]
        else:
            img_output[i,j] = 0
cv2.imshow('Multidirectional wave', img_output)
#####################
# Concave effect
img_output = np.zeros(img.shape, dtype=img.dtype)
for i in range(rows):
    for j in range(cols):
        offset_x = int(128.0 * math.sin(2 * 3.14 * i / (2*cols)))
        offset_y = 0
        if j+offset_x < cols:
            img_output[i,j] = img[i,(j+offset_x)%cols]
        else:
            img_output[i,j] = 0
cv2.imshow('Concave', img_output)
cv2.waitKey()


Summary
In this chapter, we learned how to install OpenCV-Python on various platforms.
We discussed how to read, display, and save images. We talked about the importance
of various color spaces and how we can convert between multiple color spaces. We
learned how to apply geometric transformations to images and understood how to use
those transformations to achieve cool geometric effects. We discussed the underlying
formulation of transformation matrices and how we can formulate different kinds of
transformations based on our needs. We learned how to select control points based on
the required geometric transformation. We discussed projective transformations
and learned how to use image warping to achieve any given geometric effect. In the
next chapter, we are going to discuss edge detection and image filtering. We can apply
a lot of visual effects using image filters, and the underlying formulation gives us a lot of
freedom to manipulate images in creative ways.


Get more information on OpenCV with Python By Example

Where to buy this book


You can buy OpenCV with Python By Example from the Packt Publishing website.
Alternatively, you can buy the book from Amazon, BN.com, Computer Manuals and most internet
book retailers.
Click here for ordering and shipping details.

www.PacktPub.com
