Dissertation Approval
The dissertation entitled
Date:
Place:
Examiner
Examiner
Guide
Chairman
Abstract
A video surveillance system is primarily designed to track key objects, or people exhibiting suspicious behavior, as they move from one position to another, and to record them for possible future use.
Object tracking is an essential part of surveillance systems. As part of this project, an algorithm for object tracking in video based on image segmentation and blob detection and identification was implemented on Texas Instruments' (TI's) TMS320DM6437 DaVinci multimedia processor. Using background subtraction, all objects present in the image can be detected irrespective of whether they are moving or not. With the help of image segmentation, the subtracted image is filtered and freed from salt-and-pepper noise. The segmented image is processed to detect and identify the blobs present, which are then tracked. Object tracking is carried out by feature extraction and center-of-mass calculation in the feature space of the image segmentation results of successive frames. Consequently, this algorithm can be applied to multiple moving and still objects in the case of a fixed camera.
In this project we develop and demonstrate a framework for real-time implementation of image and video processing algorithms, such as object tracking and image inversion, on the DM6437 processor. More specifically, we track one or two objects present in the scene captured by a CCD camera, which acts as the video input device, and the output is shown on an LCD display. The tracking runs in real time, consuming 30 frames per second (fps), and is robust to background and illumination changes. The performance of single object tracking using background subtraction and blob detection was very efficient in speed and accuracy compared to a PC (Matlab) implementation of a similar algorithm. Execution times for the different blocks of single object tracking were estimated using the profiler, and the accuracy of the detection was verified using the debugger provided by TI Code Composer Studio (CCS). We demonstrate that the TMS320DM6437 processor provides at least a ten-times speed-up and is able to track a moving object in real time.
Contents

Abstract
List of Figures
List of Tables
List of Abbreviations
1 Video and Image Processing Algorithms on TMS320DM6437
  1.2 Video Preview
  1.3 Image Inversion
  1.4.2 Profiling
  1.5.3 Support Files
2 Video and Image Processing Algorithms Code Work Flow Diagrams and Profiling

List of Figures
  1.7 Video Copy

List of Tables
  1.1
Chapter 1
Video and Image Processing Algorithms
on TMS320DM6437
The software development tool Code Composer Studio (CCS) is an integrated development environment. The development system works in two modes, simulator and emulator. In the emulator case, CCS is connected to the target board through an embedded emulation interface. CCS not only allows generating, compiling, assembling, and linking the source files, but also gives access to debugging through profiling, software and hardware breakpoints, and direct access to memory and even control registers. CCS also defines the memory map and writes different sections of code into these memory blocks.
Texas Instruments (TI) also provides a collection of optimized image/video processing
functions (IMGLIB). These library functions include C-callable, assembly optimized image/video
processing routines. These functions are typically used in computationally intensive real-time
applications where optimal execution speed is critical [24].
Finally, for fast implementation one can use Matlab/Simulink. A Target Support Library for TC6 is available for use with the TI DM6437. Although such an implementation is neither optimized nor efficient, it is well suited to algorithm validation and proof of concept.
1.1
This figure has four different modules. The first module is the video recorder: input video is captured by the CCD camera, the FVID video driver is called, the video frame is read from the input buffer, encoded, and the encoded video data is written to a file. The second module is video playback: a video file is read from the hard drive, decoded, and placed into the output buffer for display through the video output. The third module is video loop-through with encoder and decoder: input video is captured by the CCD camera, the FVID video driver is called, the video frame is read from the input buffer and encoded, the encoded data is written to an intermediate file, and the frame is then decoded and the decoded data written to the output buffer for video display. The last module is video preview: input video is captured by the CCD camera, a call to the FVID video driver reads the video frame from the input buffer, and the input buffer is passed to the output buffer for video display, as shown in figure 5.1.
1.1.1
Video Capture
The video capture application uses the EPSI APIs for the VPFE to capture raw video frames from the video input device, a CCD camera. The captured frames are stored in a file. The control flow of the video capture application is shown in the figure. This application uses only the EPSI APIs for video capture. The following is the sequence of operations:
1. First, initialize the video capture device using VPFE_open(). The device is configured with the help of a set of initialization parameters.
2. Optionally configure the video capture device using VPFE_control(). The video device was initialized in Step 1; if any other configuration is required, it can be handled by VPFE_control().
3. Capture a video frame using VPFE_getBuffer(). This function dequeues a buffer, containing the captured video frame, from the VPFE buffer queue.
4. Write the video frame into a file using FILE_write(). The actual write time will depend on the data transfer rate of the target media.
5. Return the buffer back to the VPFE buffer queue using VPFE_returnBuffer().
6. Check whether more frames need to be captured. If yes, go to Step 3.
7. Close the VPFE device using VPFE_close().
The total flow is explained in the video capture pseudo code, video capture control flow, and video capture application, shown in figures 5.2, 5.3, and 5.4 respectively.
VPFE_Params vpfeParams;
void APP_videoCapture(int numFramesToCapture)
{
    int nframes = 0;
    char *frame = NULL;
    /* Initialize VPFE driver */
    vpfeHdl = VPFE_open(&vpfeParams);
    while (nframes++ < numFramesToCapture)
    {
        VPFE_getBuffer(vpfeHdl, frame);
        FILE_write(fileHdl, frame, FRAME_SIZE);
        VPFE_returnBuffer(vpfeHdl, frame);
    }
    VPFE_close(vpfeHdl);
}
1.1.2
Video Display
The video display application uses the EPSI APIs for the VPBE to display the raw video frames read from the video input file. The stored video frames are read by a file read and placed into the video display device using the EPSI APIs for the VPBE. This application is similar to Demo 1, with the capture device replaced by the display device: we read from the file instead of writing to it, and use the VPBE driver instead of the VPFE driver.
1.1.3
Video Encoder
The video encoder application needs to capture video frames from the Video Processing Front End (VPFE), pass the frames to a video encoder as input, and store the encoded frames to a file. Depending on the use case, the application might transmit the encoded frames over the network, or the file may be fed to a video decoder as input and the decoded output sent to the VPBE for displaying the video data. The video control application, video encode control flow, and video encode pseudo code are shown in figures 5.5, 5.6, and 5.7 respectively.
void APP_videoEncode(int numFramesToCapture)
{
    int nframes = 0;
    char *frame = NULL;
    /******************************
     * Creation Phase
     ******************************/
    /* Initialize VPFE driver */
    vpfeHdl = VPFE_open(&vpfeParams);
    /* Initialize Codec Engine */
    engineHdl = Engine_open("encode", NULL, NULL);
    /* Initialize Video Encoder */
    videncHdl = VIDENC_create(engineHdl, "h264enc", &videncParams);
    /* Configure Video Encoder */
    VIDENC_control(videncHdl, XDM_SETPARAMS, &videncDynParams,
                   &videncStatus);
    /* Initialize file */
    fileHdl = FILE_open("test.enc", "w");
    /******************************
     * Execution Phase
     ******************************/
    while (nframes++ < numFramesToCapture)
    {
        VPFE_getBuffer(vpfeHdl, frame);
        VIDENC_process(videncHdl, &inBufDesc, &outBufDesc,
                       &inArgs, &outArgs);
        VPFE_returnBuffer(vpfeHdl, frame);
        FILE_write(fileHdl, outBufDesc.bufs[0],
                   outArgs.bytesGenerated);
    }
    /******************************
     * Deletion Phase
     ******************************/
    VPFE_close(vpfeHdl);
    VIDENC_delete(videncHdl);
    Engine_close(engineHdl);
    FILE_close(fileHdl);
}
1.1.4
Video Copy
Read complete frames from the input video file, encode, decode, and write to the output video file. This application expects the encoder to take one buffer in and produce one buffer out, and the sizes of the input and output buffers must be able to hold NSAMPLES bytes of data. The video copy encoder uses DMA to copy from the input buffer, and it uses DMA to copy to the encoded buffer. Therefore, we must write back the cache before calling the process function, and we must invalidate the cache after calling the process function. The complete flow is shown in figure 5.10.
1.1.5
Video Encode/Decode to/from File
Read from the input file fh_in, encode it, and store the result in fh_enc; then decode fh_enc and store the result in fh_out.
Complete frames are read from fh_in into the input buffer, and the "per loop" buffer descriptor settings for the input buffer are prepared before the frame is encoded. Since the input buffer was filled by the CPU (via fread), we must write back the cache for the subsequent DMA read by the video encoder. The encoded data is then written to the intermediate file, after which the frame is decoded and the decoded data written to the file fh_out. The complete flow is shown in figure 5.9.
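The file-based encode/decode flow above can be sketched in portable C with identity stubs standing in for the real VIDENC/VIDDEC calls. FRAME_SIZE and the stub codecs are illustrative only, and the cache write-back/invalidate pairs required on the DM6437 are omitted in this host-side sketch:

```c
#include <stdio.h>
#include <string.h>

#define FRAME_SIZE 64   /* stand-in value; a real D1 frame is far larger */

/* Identity stubs standing in for VIDENC_process / VIDDEC_process. */
static void encode_frame(const unsigned char *in, unsigned char *out)
{ memcpy(out, in, FRAME_SIZE); }
static void decode_frame(const unsigned char *in, unsigned char *out)
{ memcpy(out, in, FRAME_SIZE); }

/* Read whole frames from fh_in, encode to the intermediate file fh_enc,
 * decode, and write the result to fh_out. On the DM6437 a cache
 * write-back would precede the encode (the CPU filled the buffer via
 * fread) and an invalidate would follow it. */
int loop_through(FILE *fh_in, FILE *fh_enc, FILE *fh_out)
{
    unsigned char raw[FRAME_SIZE], enc[FRAME_SIZE], dec[FRAME_SIZE];
    int nframes = 0;
    while (fread(raw, 1, FRAME_SIZE, fh_in) == FRAME_SIZE) {
        encode_frame(raw, enc);
        fwrite(enc, 1, FRAME_SIZE, fh_enc);   /* intermediate file */
        decode_frame(enc, dec);
        fwrite(dec, 1, FRAME_SIZE, fh_out);
        nframes++;
    }
    return nframes;
}
```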
Step 3: The work flow of the project is as follows: the main function calls CERuntime_init() to initialize the Codec Engine (CE) environment, and then calls TSK_create to create a thread running the smain() function. This function takes command line input and works according to that input. "Fxn" is an alias name for an XDC function type.
CERuntime_init();
TSK_create((Fxn)smain, &attrs, argc, argv)
CERuntime_init(): This is a Codec Engine function that provides module-wide initialization of the Codec Engine runtime. It is paired with CERuntime_exit(Void), which finalizes the CE modules used in the current configuration. CERuntime_init(Void) must be called prior to using any Codec Engine APIs, and it initializes all Codec Engine modules used in the current configuration.
Step 4: In the smain(Int argc, String argv[]) function, we first declare the engine handle and the VISA handles for the video encoder and decoder, i.e.
Engine_Handle ce = NULL;
VIDDEC_Handle dec = NULL;
VIDENC_Handle enc = NULL;
where VIDDEC_Handle and VIDENC_Handle are the VISA handles for the decoder and encoder. Then we declare file pointers for the input, output, and encoded files, and check argc to determine whether the input comes from the command line or from a file.
FILE *fh_in = NULL;
FILE *fh_enc = NULL;
FILE *fh_out = NULL;
Step 6: If argc is one or zero, there is no command line input, so the input, encoded, and output files are read from the specified file locations.
if (argc <= 1) {
/* Input file is 10 raw YUV CIF frames */
inFile = "..\\data\\akiyo_10frames_qcif_in.yuv";
encodedFile = "..\\data\\akiyo_10frames_qcif.h264";
outFile = "..\\data\\akiyo_10frames_qcif_out.yuv";
}
If argc is not equal to 4, we report an error; otherwise the corresponding assignments take place.
else if (argc != 4) {
fprintf(stderr, usage, argv[0]);
exit(1);
}
else {
progName = argv[0];
inFile = argv[1];
encodedFile = argv[2];
outFile = argv[3];
}
Step 7: Now we start the DSP engine. Engine_open is a CE API used to open an engine instance. An engine may be opened more than once; each open returns a unique handle that can be used to create codec instances or get the status of the server.
The Sobel operator and image compression were also carried out on the DaVinci digital media processor DM6437. The input was captured using a CCD camera and the output displayed on an LCD display. Blob detection and identification was very slow because of high computational complexity, owing to the limited speed of the processor, the coding style, and the algorithmic cost of multiple blob detection, identification, and centroid calculation. Execution times for the different blocks of single object tracking were estimated using the profiler, and the accuracy of the detection was tested with the debugger provided by TI Code Composer Studio (CCS).
There are two approaches to running video processing algorithms on the TMS320DM6437: the real-time approach using the CCD camera and display, and a file-based approach in which a file is read from the hard drive, processed (e.g., encoded/decoded or otherwise transformed), and stored back to the hard drive. In the second case there is no need to configure the capture and display channels or to check for PAL or NTSC. The first case is used for real-time video preview and the video encoder/decoder, whereas the second is used for the video recorder, video copy, and video encoder/decoder to/from file.
The working principle of video preview is explained in three steps.
Step 1: Read video data through the VPFE and use EDMA to route it through the SCR into external DDR memory, as shown in figure 5.1.
FVID_exchange(vpfeChan, bufp);
Step 2: Transfer raw video data from DDR memory into L1 data memory using EDMA. The CPU processes the L1 video data using the programs in L1 program memory, stores the processed video data to L1 memory, and stores the result back to DDR using EDMA, as shown in figure 5.2.
VIDENC_process(videncChan, inbuf, outbuf, inarg, outarg);
Step 3: Read the video data from external DDR memory, use EDMA to route it through the SCR, and put the video data onto the display through the VPBE, as shown in figure 5.3.
FVID_exchange(hGioVpbeVid0, &frameBuffPtr);
1.2
Video Preview:
Read video data from the CCD camera through the VPFE, and use EDMA to route it through the SCR into external DDR memory. Transfer the raw video data from DDR memory into L1 data memory using EDMA. The CPU processes the L1 video data. Store the processed video data to L1 memory and store the result back to DDR using EDMA. Read the video data from external DDR memory, use EDMA to route it through the SCR, and put the video data onto the display through the VPBE; its implementation on the DM6437 is shown in figure 5.4.
1.3
Image Inversion
Read video data through the VPFE, and use EDMA to route it through the SCR into external DDR memory. Transfer the raw video data from DDR memory into L1 data memory using EDMA. The CPU processes the L1 video data using the image inversion program, which inverts the gray level of the video frame; this is performed by subtracting each gray-level value from the maximum gray value (255). Store the processed video data to L1 memory and store the result back to DDR using EDMA. Read the video data from external DDR memory, use EDMA to route it through the SCR, and put the video data onto the display through the VPBE; its implementation on the DM6437 is shown in figure 5.5.
Read complete frames via the input buffer pointer, run the image inversion code, encode, decode, and write via the output buffer pointer.
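The image_inverse routine itself is not listed above; a minimal host-side sketch of what it might look like is given below, assuming the frame buffer holds interleaved UYVY data with the luma bytes at odd offsets (an assumption about the board's capture format, not confirmed by the listing):

```c
#include <stddef.h>

/* Invert the luma (Y) bytes of an interleaved UYVY frame in place.
 * In UYVY packing the Y samples sit at odd byte offsets (U Y V Y ...),
 * so we step through the buffer two bytes at a time and subtract each
 * Y value from the maximum gray level, 255. Chroma bytes are untouched. */
void image_inverse(unsigned char *frame, size_t nbytes)
{
    size_t i;
    for (i = 1; i < nbytes; i += 2)   /* Y bytes at offsets 1, 3, 5, ... */
        frame[i] = 255 - frame[i];
}
```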
/* grab a fresh video input frame by taking frameBuffPtr->frame.frameBufferPtr */
image_inverse( &src[0]);
/* encode the frame */
status = VIDENC_process(enc, &inBufDesc, &encodedBufDesc,
&encInArgs,&encOutArgs);
/* decode the frame */
status = VIDDEC_process(dec, &encodedBufDesc, &outBufDesc,
&decInArgs,&decOutArgs);
/* display the video frame */
FVID_exchange(hGioVpbeVid0, &frameBuffPtr);
Alternatively, we can use the following code: read complete frames via the input buffer pointer, run the image inversion code, and write via the output buffer pointer.
/* grab a fresh video input frame */
FVID_exchange(hGioVpfeCcdc, &frameBuffPtr);
/* process the frame by taking frameBuffPtr->frame.frameBufferPtr */
image_inverse( &src[0]);
/* display the video frame */
FVID_exchange(hGioVpbeVid0, &frameBuffPtr);
1.4
JPEG Image Compression
1.4.1
The block diagram of different steps for implementing JPEG image compression algorithm on
TMS320DM6437 platform is shown in figure 6.1.
YUV format. Here, only the Y components were selected for further processing; the U and V components were discarded, effectively converting the YUV image to grey scale. The U and V components are subsampled and present in an interleaved fashion between the Y components. This is known as the packed YUV color format, which is shown below in figure 2.2.1.1. In the packed YUV color format every other byte is Y and every fourth byte is either U or V, in the order Y U Y V Y U Y V, as shown in figure 6.2.
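As a concrete illustration of the packed format described above, the following sketch keeps only the Y bytes of a YUYV-packed buffer to obtain the grey-scale image (the function name and buffer sizes are illustrative, not the project's actual code):

```c
/* Pull the luma plane out of a packed YUYV (Y U Y V ...) buffer.
 * Every other byte is Y, so copying the even offsets yields the
 * grey-scale image; the interleaved U and V bytes are skipped. */
void extract_luma(const unsigned char *packed, unsigned char *grey, int npixels)
{
    int i;
    for (i = 0; i < npixels; i++)
        grey[i] = packed[2 * i];      /* Y at offsets 0, 2, 4, ... */
}
```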
3. Applying DCT and IDCT on 8x8 Grey Scale Blocks
After converting the entire image from YUV format to grey scale format, the DCT is applied sequentially to individual 8x8 grey scale blocks in order to convert the image from the spatial domain into the frequency domain. The DCT equation explained previously can be used to apply the DCT on 8x8 blocks, but it requires a great deal of mathematical computation and hence a high execution time, which in turn degrades the performance of the system. Therefore, we implemented another popular approach to the DCT in this project.
$$(X_b)_{u,v} = \frac{C(u)\,C(v)}{\sqrt{N/2}\,\sqrt{N/2}} \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} (x_b)_{i,j}\, \cos\frac{(2i+1)u\pi}{2N}\, \cos\frac{(2j+1)v\pi}{2N}, \qquad C(u) = \begin{cases} 1/\sqrt{2}, & u = 0 \\ 1, & u > 0 \end{cases}$$
In this approach, the 8x8 image block is first multiplied by the cosine transform matrix and then by its transpose of the same size, yielding the 8x8 DCT matrix and thereby converting the image from the spatial domain to the frequency domain. The transpose matrix is generated by interchanging the rows and columns of the cosine transform matrix.
Every 8x8 DCT matrix was divided by the quantization matrix of the same size in order to quantize the high-frequency DCT coefficients to zero. By dividing every 8x8 matrix by the quantization matrix, most of the high-frequency DCT coefficients with values smaller than the corresponding quantization matrix entries were reduced to zero. This step contributes most to the lossiness of the JPEG algorithm, as the finer detail of the image is lost through this truncation of the high-frequency DCT coefficients to zero. The original values cannot be restored in the decompression process.
1.4.2
Profiling
Profiling is used to measure code performance and to ensure efficient use of the DSP target's resources during debug and development sessions. Profiling is carried out by the profiler built into Code Composer Studio. Using the profiler, developers can easily profile all C/C++ functions in their application for instruction cycles or other events such as cache misses or hits, pipeline stalls, and branches. Profile ranges are used to concentrate effort on high-usage areas of code during optimization, so that developers can generate well-tuned code. Profiling is available for ranges of assembly, C++, or C code in any combination. To increase productivity, all profiling facilities are available throughout the development cycle.
Profiling was applied to the different functions of the JPEG image compression algorithm, and the time taken to execute each function was measured from its inclusive and exclusive cycle counts, its access count, and the processor clock frequency. The whole profiling process is carried out in the following steps.
Function                     | Access count | Incl. cycles | Excl. cycles | Time
multiply1                    | 18           | 101009506    | 34331926     | 8.016
multiply2                    | 18           | 199555672    | 68702923     | 15.837
multiply3                    |              | 31110626     | 10630951     | 7.407
multiply4                    |              | 58176765     | 20066440     | 16.621
multiply_coefficient         | 12           | 12933984     | 5291010      | 1.539
multiply_coefficient_reverse | 11           | 68746        | 68746        | 0.009
zig_zag_pattern              | 11           | 49957        | 49957        | 0.006
invzig_zag_pattern           | 11           | 48280        | 48280        | 0.006
rle_invrle                   | 108          | 1719464      | 1719464      | 0.023
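The timing column follows from the cycle counts and the DSP clock rate. As a sketch (the 600 MHz clock is an assumption; substitute the actual device clock when reproducing the table's timings):

```c
/* Convert profiler cycle counts to execution time in milliseconds.
 * clock_hz is the DSP clock frequency, e.g. 600e6 for a 600 MHz part
 * (an assumed value, not taken from the profiling run above). */
double cycles_to_ms(unsigned long cycles, double clock_hz)
{
    return 1000.0 * (double)cycles / clock_hz;
}
```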
1.5
Code Composer Studio (CCS)
Code Composer Studio (CCS) provides an Integrated Development Environment (IDE) that incorporates the software tools used to develop applications targeted at Texas Instruments digital signal processors. CCS includes tools for code generation, such as a C compiler, an assembler, and a linker. It has graphical capabilities and supports real-time debugging, providing an easy-to-use software tool to build and debug programs.
The C compiler compiles a C source program with extension .c to produce an assembly source file with extension .asm. The assembler assembles an .asm source file to produce a machine language object file with extension .obj. The linker combines object files and object libraries as input to produce an executable file with extension .out. This executable file represents a linked common object file format (COFF), popular in Unix-based systems and adopted by several makers of digital signal processors.
To create an application project, one can add the appropriate files to the project. Compiler/linker options can readily be specified. A number of debugging features are available, including setting breakpoints and watching variables; viewing memory, registers, and mixed C and assembly code; graphing results; and monitoring execution time. One can step through a program in different ways (step into, over, or out).
Real-time analysis can be performed using real-time data exchange (RTDX). RTDX allows data exchange between the host PC and the target DVDP, as well as analysis in real time without stopping the target. Key statistics and performance can be monitored in real time. Communication with on-chip emulation support, to control and monitor program execution, takes place through the Joint Test Action Group (JTAG) interface. The DM6437 EVM board includes a JTAG interface through the USB port.
CCS provides a single IDE to develop an application, offering the following features:
1. Programming the DSP using C/C++
2. Ready-to-use built-in functions for video and image processing
3. Run-time debugging on the hardware
4. Debugging an application using software breakpoints
Some of the steps involved in developing a successful application include creating a project environment, developing code using C or C++, linking appropriate library functions, compiling the code, converting it into assembly language code, and downloading it onto the DM6437 platform using the JTAG interface. The image of the CCS IDE is shown below.
Combining all the features, such as the advanced DSP core, interfaces, and on-board memory, with the advantages of CCS, the TMS320DM6437 was considered an obvious choice for this project. The DVDP stood out as an excellent platform for this project.
1.5.1
Use the USB cable to connect the DVDSK board to the USB port on the PC. Use the 5-V power supply included with the DVDSK package to connect to the +5-V power connector on the DVDSK to turn it on. Install CCS from the CD-ROM included with the DM6437 target support file, preferably using the c:\CCSv3.3 structure. The CCS icon should be on the desktop and is used to launch CCS. The code generation tools (C compiler, assembler, linker) are used with CCS version 3.x. CCS provides useful documentation, included with the DVDSK package, on the following (see the Help icon):
1. Code generation tools (compiler, assembler, linker, etc.).
2. Tutorials on CCS, compiler, RTDX.
3. DSP instructions and registers.
4. Tools on RTDX, DSP/basic input/output system (DSP/BIOS), and so on.
There are also examples included with CCS within the folder c:\C6416\examples. They illustrate the board and chip support library files, DSP/BIOS, and so on. CCS version 3.x was used to build and test the examples included in this book. A number of files included in the following subfolders/directories within c:\C6416 (suggested structure during CCS installation) can be very useful:
1. myprojects: a folder supplied only for your projects.
2. bin: contains many utilities.
3. docs: contains documentation and manuals.
4. c6000\cgtools: contains code generation tools.
5. c6000\RTDX: contains support files for real-time data transfer.
6. c6000\bios: contains support files for DSP/BIOS.
7. examples: contains examples included with CCS.
8. tutorial: contains additional examples supplied with CCS.
1.5.2
Your working project folder contains a number of files with different extensions. They include:
1. file.pjt: to create and build a project named file
2. file.c: C source program
3. file.asm: assembly source program created by the user, by the C compiler, or by the linear optimizer
4. file.sa: linear assembly source program. The linear optimizer uses file.sa as input to produce an assembly program file.asm
5. file.h: header support file.
6. file.lib: library file, such as the run-time support library file rts6700.lib
7. file.cmd: linker command file that maps sections to memory
8. file.obj: object file created by the assembler
9. file.out: executable file created by the linker to be loaded and run on the C6713 processor
10. file.cdb: configuration file when using DSP/BIOS
11. file.sbl, .tcl, .tco, .h62, .s62, .paf2, .map
1.5.3
Support Files
The following support files located in the folder support (except the library files) are used for
most of the examples and projects discussed in this book:
1. evmdm6437.gel: a General Extension Language (GEL) file used to initialize the target (CPU and memory map) when CCS connects to the board.
2. evmdm6437.c: contains functions to initialize the DSK, the codec, the serial ports, and
for I/O. It is not included with CCS.
3. evmdm6437.h: header file with function prototypes. Features such as those used to select
the mic input in lieu of line input (by default), input gain, and so on are obtained from
this header file (modified from a similar file included with CCS).
4. dm6437.cmd: sample linker command file. This generic file can be changed when using
external memory in lieu of internal memory.
5. Vectors_intr.asm: a modified version of a vector file included with CCS to handle interrupts. Twelve interrupts, INT4 through INT15, are available, and INT11 is selected within this vector file. They are used for interrupt-driven programs.
6. Vectors_poll.asm: vector file for programs using polling.
7. rtsdm6437.lib, evmdm6437bsl.lib, evmdm6437csl.lib: run-time, board, and chip support library files, respectively. These files are included with CCS.
1.5.4
Chapter 2
Video and Image Processing Algorithms
Code Work Flow Diagrams and Profiling
2.1
The video encoder flow diagram is shown in figures 8.1, 8.2, and 8.3.
2.2
Chapter 3
Working Code for Multiple Object
Tracking on DM6437
/*
* ======== video_preview.c ========
*
*/
#include <psp_vpfe.h>
#include <psp_vpbe.h>
#include <fvid.h>
#include <psp_tvp5146_extVidDecoder.h>
#define STANDARD_NTSC 1
#define FRAME_BUFF_CNT 6
//*******************************************************
// USER DEFINED FUNCTIONS
//*******************************************************
/////////////////////////////////////////////////
//*******************************************************
// VARIABLE ARRAYS
//*******************************************************
////////////////////////
/*
* ======== main ========
*/
void main() {
EVMDM6437_DIP_init();
/* VPSS PinMuxing */
/* CI10SEL
- No CI[1:0]
*/
/* CI32SEL
- No CI[3:2]
*/
/* CI54SEL
- No CI[5:4]
*/
/* CI76SEL
- No CI[7:6]
*/
/* CFLDSEL
- No C_FIELD
*/
/* CWENSEL
- No C_WEN
*/
/* HDVSEL
*/
/* CCDCSEL
*/
/* AEAW
*/
/* VPBECKEN
- VPBECLK enabled
*/
/* RGBSEL
- No digital outputs
*/
/* CS3SEL
- LCD_OE/EM_CS3 disabled
*/
/* CS4SEL
- CS4/VSYNC enabled
*/
/* CS5SEL
- CS5/HSYNC enabled
*/
/* VENCSEL
- VCLK,YOUT[7:0],COUT[7:0] enabled */
/* AEM
&= (0x005482A3u);
|= (0x005482A3u);
/* PCIEN
0: PINMUX1 - Bit 0 */
return;
}
/*
* ======== video_preview ========
*/
void video_preview(void) {
FVID_Frame *frameBuffTable[FRAME_BUFF_CNT];
FVID_Frame *frameBuffPtr;
GIO_Handle hGioVpfeCcdc;
GIO_Handle hGioVpbeVid0;
GIO_Handle hGioVpbeVenc;
int status = 0;
int result;
int i;
int standard;
int width;
int height;
PSP_VPFECcdcConfigParams vpfeCcdcConfigParams =
VID_PARAMS_CCDC_DEFAULT_D1;
PSP_VPBEOsdConfigParams vpbeOsdConfigParams =
VID_PARAMS_OSD_DEFAULT_D1;
PSP_VPBEVencConfigParams vpbeVencConfigParams;
standard = read_JP1();
if (standard != STANDARD_NTSC) {
    width  = 720;
    height = 576;
    vpbeVencConfigParams.displayStandard = PSP_VPBE_DISPLAY_PAL_INTER
}
else {
    width  = 720;
    height = 480;
    vpbeVencConfigParams.displayStandard = PSP_VPBE_DISPLAY_NTSC_INTE
}
vpfeCcdcConfigParams.height = vpbeOsdConfigParams.height = height;
vpfeCcdcConfigParams.width = vpbeOsdConfigParams.width = width;
vpfeCcdcConfigParams.pitch = vpbeOsdConfigParams.pitch = width * 2;
if (status == 0) {
    PSP_VPFEChannelParams vpfeChannelParams;
    vpfeChannelParams.id     = PSP_VPFE_CCDC;
    vpfeChannelParams.params = (PSP_VPFECcdcConfigParams*)&vpfeCcdcConfi
    hGioVpfeCcdc = FVID_create("/VPFE0",IOM_INOUT,NULL,&vpfeChannelParam
    status = (hGioVpfeCcdc == NULL ? -1 : 0);
}
if (status == 0) {
    PSP_VPBEChannelParams vpbeChannelParams;
    vpbeChannelParams.id     = PSP_VPBE_VIDEO_0;
    vpbeChannelParams.params = (PSP_VPBEOsdConfigParams*)&vpbeOsdConfigP
    hGioVpbeVid0 = FVID_create("/VPBE0",IOM_INOUT,NULL,&vpbeChannelParam
    status = (hGioVpbeVid0 == NULL ? -1 : 0);
}
if (status == 0) {
PSP_VPBEChannelParams vpbeChannelParams;
vpbeChannelParams.id     = PSP_VPBE_VENC;
hGioVpbeVenc = FVID_create("/VPBE0",IOM_INOUT,NULL,&vpbeChannelParam
status = (hGioVpbeVenc == NULL ? -1 : 0);
}
FVID_queue(hGioVpbeVid0, frameBuffTable[4]);
FVID_queue(hGioVpbeVid0, frameBuffTable[5]);
}
extract_uyvy ((frameBuffPtr->frame.frameBufferPtr));
copy_frame();
frame_substract();
//tracking();
write_uyvy ((frameBuffPtr->frame.frameBufferPtr));
/*
* ======== read_JP1 ========
* Read the PAL/NTSC jumper.
*
* Retry, as I2C sometimes fails:
*/
static int read_JP1(void)
{
int jp1 = -1;
{
I_u1[r][c]
I_y1[r][2*c]
I_v1[r][c]
I_u[r][c] ;
I_y[r][2*c]
/*
if(r > row1 && r < row2 && c > col1 && c < col2)
* (((unsigned char * )currentFrame)+ offset) =I_temp[r][c];
else
* (((unsigned char * )currentFrame)+ offset) = 0;
offset++;
* (((unsigned char * )currentFrame)+ offset) = 0x80;
offset++;
*/
}
}
}
void copy_frame()
{
    int r, c;
    for (r = 0; r < 480; r++) {
        for (c = 0; c < 360; c++) {
            I_u[r][c]     = I_u1[r][c];
            I_y[r][2*c]   = I_y1[r][2*c];
            I_v[r][c]     = I_v1[r][c];
            I_y[r][2*c+1] = I_y1[r][2*c+1];
        }
    }
}
void frame_substract()
{
int arr[1000];
int r, c,m,n,p,q,l,i,j,ix,jx,a,t;
int cent_x,cent_y,cent_z;
int centroid_x[10000],centroid_y[10000];
int clow,chigh,rlow,rhigh;
int rtemp,ctemp;
int count,LL,LH,t_area;
int flag,ind;
int iblob=0;
int k=1;
for (t = 0; t < 1000; t++)
    arr[t] = 0;
for (r = 0; r < 480; r++) {
    for (c = 0; c < 360; c++) {
        I_u3[r][c]     = I_u1[r][c]     - I_u2[r][c];
        I_y3[r][2*c]   = I_y1[r][2*c]   - I_y2[r][2*c];
        I_v3[r][c]     = I_v1[r][c]     - I_v2[r][c];
        I_y3[r][2*c+1] = I_y1[r][2*c+1] - I_y2[r][2*c+1];
    }
}
for (r = 0; r < 480; r++) {
    for (c = 0; c < 360; c++) {
        I_u2[r][c]     = I_u1[r][c];
        I_y2[r][2*c]   = I_y1[r][2*c];
        I_v2[r][c]     = I_v1[r][c];
        I_y2[r][2*c+1] = I_y1[r][2*c+1];
    }
}
{
    I_u3[m][n]     = 128;
    I_y3[m][2*n]   = 16;
    I_v3[m][n]     = 128;
    I_y3[m][2*n+1] = 16;
}
}
/*
for (i=0;i<LAST_ROW;i++){
for (j=0;j<LAST_COL;j++){
I_y4[i][j]=0;}}
for (i=0;i<LAST_ROW;i++){
for (j=0;j<LAST_COL;j++){
if(I_y3[i][j]!=0 && I_y4[i][j]==0 ){
clow=j;
rlow=i;
flag=1;
chigh=clow+1;
rhigh=rlow+1;
while(flag==1){
flag=0;
}; //end of while
count=0;
LL=rhigh-rlow+1;
LH=chigh-clow+1;
t_area=LL*LH;
%for jx
for ix
}//end%t_area
//tracking
cent_x=0;
cent_y=0;
cent_z=0;
ind=1;
for( a=1;a<=4*iblob;a=a+4)
{
for (m=arr[a];m<=arr[a+2];m++)
{
for (n=arr[a+1];n<=arr[a+3];n++)
{
//for(m = 0; m < 480; m++)
// {
//for(n = 0; n < 360; n++)
//{
<4 || I_y3[m][n]>200))
{
I_u3[m][n]
= 128 ;
I_y3[m][2*n]
= 16 ;
I_v3[m][n]
= 128
I_y3[m][2*n+1] = 16;
else
{
cent_x= cent_x + m ;
cent_y= cent_y + n ;
cent_z= cent_z + 1 ;
}
//centroid_x= (cent_x/cent_z);
//centroid_y=(cent_y/cent_z);
} //end of n
} //end of m
centroid_x[ind]= (cent_x/cent_z);
centroid_y[ind]=(cent_y/cent_z);
ind=ind+1;
}//end of k
//drawing rectangle
for(l=1;l<=ind;l++)
{
for(p =centroid_x[l]-10 ; p < centroid_x[l]+10; p++)
{
for(q = centroid_y[l]-10; q < centroid_y[l]+10; q++)
{
= 255;
51
I_y[p][2*q]
= 255;
I_v[p][q]
= 255;
I_y[p][2*q+1] = 255;
}
else
{
I_u[p][q]
= I_u[p][q];
I_y[p][2*q]
= I_y[p][2*q];
I_v[p][q]
= I_v[p][q];
I_y[p][2*q+1] = I_y[p][2*q+1];
}
}
}
}//end of l loop
*/
}//end of function
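The background-subtraction step in frame_substract() amounts to an absolute frame difference followed by a threshold. It can be sketched on a small self-contained grayscale buffer; the buffer size, names, and threshold value below are illustrative, not the DM6437 buffers.

```c
#define DIFF_H  4
#define DIFF_W  4
#define DIFF_BG 16   /* luma value used to mark background, as in the listing */

/* Difference two DIFF_H x DIFF_W luma frames; pixels whose absolute
   difference is below 'thresh' are treated as unchanged background. */
void frame_diff(const unsigned char prev[DIFF_H][DIFF_W],
                const unsigned char curr[DIFF_H][DIFF_W],
                unsigned char out[DIFF_H][DIFF_W], int thresh)
{
    int r, c;
    for (r = 0; r < DIFF_H; r++)
    {
        for (c = 0; c < DIFF_W; c++)
        {
            int d = (int)curr[r][c] - (int)prev[r][c];
            if (d < 0)
                d = -d;
            out[r][c] = (d < thresh) ? DIFF_BG : curr[r][c];
        }
    }
}
```

Because the difference is taken against the previous frame rather than a fixed model, both moving and newly appearing still objects produce foreground pixels, which is the behaviour the board code relies on.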
void tracking()
{
    int r, c, m, n, p, q;
    int cent_x, cent_y, cent_z;
    int centroid_x, centroid_y;
    int dim_x, dim_y;

    cent_x = 0;
    cent_y = 0;
    cent_z = 0;

    /* scan the difference image: pixels outside the luma band are reset
       to the YUV background, the rest contribute to the centre of mass
       (the loop headers and the guard were lost in the original listing
       and are restored from the multi-blob code in frame_substract()) */
    for (m = 0; m < 480; m++)
    {
        for (n = 0; n < 360; n++)
        {
            if (I_y3[m][n] < 4 || I_y3[m][n] > 200)
            {
                I_u3[m][n]      = 128;
                I_y3[m][2*n]    = 16;
                I_v3[m][n]      = 128;
                I_y3[m][2*n+1]  = 16;
            }
            else
            {
                cent_x = cent_x + m;
                cent_y = cent_y + n;
                cent_z = cent_z + 1;
            }
        }
    }
    if (cent_z == 0)
        return;   /* no foreground pixels: centroid undefined this frame */
    centroid_x = cent_x / cent_z;
    centroid_y = cent_y / cent_z;

    /* draw a 20x20 rectangle around the centroid; the guard below was
       lost in the original listing and is restored as a border test */
    for (p = centroid_x - 10; p < centroid_x + 10; p++)
    {
        for (q = centroid_y - 10; q < centroid_y + 10; q++)
        {
            if (p == centroid_x - 10 || p == centroid_x + 9 ||
                q == centroid_y - 10 || q == centroid_y + 9)
            {
                I_u[p][q]      = 255;
                I_y[p][2*q]    = 255;
                I_v[p][q]      = 255;
                I_y[p][2*q+1]  = 255;
            }
            else
            {
                /* original else branch left the pixel unchanged */
                I_u[p][q]      = I_u[p][q];
                I_y[p][2*q]    = I_y[p][2*q];
                I_v[p][q]      = I_v[p][q];
                I_y[p][2*q+1]  = I_y[p][2*q+1];
            }
        }
    }
}
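The centre-of-mass computation in tracking() generalises to any binary foreground mask: average the row and column indices of all foreground pixels. A minimal self-contained sketch follows; the mask size and names are illustrative.

```c
#define MASK_H 4
#define MASK_W 4

/* Integer centroid (row, col) of the nonzero pixels of a MASK_H x MASK_W
   foreground mask.  Returns 0 if the mask is empty, 1 otherwise. */
int mask_centroid(const unsigned char mask[MASK_H][MASK_W],
                  int *row, int *col)
{
    int r, c, sum_r = 0, sum_c = 0, count = 0;
    for (r = 0; r < MASK_H; r++)
    {
        for (c = 0; c < MASK_W; c++)
        {
            if (mask[r][c])
            {
                sum_r += r;
                sum_c += c;
                count++;
            }
        }
    }
    if (count == 0)
        return 0;   /* empty mask: centroid undefined */
    *row = sum_r / count;
    *col = sum_c / count;
    return 1;
}
```

Guarding against an empty mask matters in practice: on a frame with no motion the pixel count is zero and the division in the board code would otherwise be undefined.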
References
[1] Morimoto, T.; Kiriyama, O.; Harada, Y., Object tracking in video images based on image
segmentation and pattern matching, IEEE conference proceedings, vol. 5, pp. 3215-3218,
2005.
[2] Yamaoka, K.; Morimoto, T.; Adachi, H.; Koide, T.; Mattausch, H.J., Image segmentation
and pattern matching based FPGA/ASIC implementation architecture of real-time object
tracking, Design Automation, 2006. Asia and South Pacific Conference on, 6 pp., 24-27
Jan. 2006, doi: 10.1109/ASPDAC.2006.1594678.
[3] Qiaowei Li; Shuangyuan Yang; Senxing Zhu, ...proaches, Computer Science and Automation Engineering (CSAE), 2011 IEEE International Conference on, vol. 2, pp. 465-468,
10-12 June 2011.
[4] Patra, D.; Santosh Kumar, K.; Chakraborty, D., Object Tracking in Video Images Using
Hybrid Segmentation Method and Pattern Matching, India Conference (INDICON), 2009
Annual IEEE, pp. 1-4, 18-20 Dec. 2009, doi: 10.1109/INDCON.2009.5409361.
[5] Watve, A.K., Object tracking in video scenes, M.Tech. seminar, IIT Kharagpur, India, 2010.
[6] Uy, D.L.,An algorithm for image clusters detection and identification based on color for an
autonomous mobile robot, Science and Education, Oak Ridge Inst., TN, DOE/OR/00033
T670, 1996
[7] Bochem, A.; Herpers, R.; Kent, K.B.; , Acceleration of Blob Detection within Images in
Hardware, University of New Brunswick, Dec 15, 2009,pp 1-37.
[8] Kaspers, A.; ,Blob Detection, Biomedical Image Sciences, Image Sciences Institute,
UMC Utrecht, May 5, 2011.
[9] Gupta, M., Cell Identification by Blob Detection, UACEE International Journal of Advances in Electronics Engineering, Volume 2, Issue 1, 2012.
[10] Hinz, S.; , Fast and subpixel precise blob detection and attribution, Image Processing,
2005. ICIP 2005. IEEE International Conference on , vol.3, no., pp. III- 457-60, 11-14 Sept.
2005 doi: 10.1109/ICIP.2005.1530427.
[11] A. R. Francois, Real-time multi-resolution blob tracking, Technical Report IRIS-04423,
Institute for Robotics and Intelligent Systems, University of Southern California, July
2004.
[12] M. Mancas et al, Augmented Virtual Studio, Tech. rep. 4. 2008. Pp.: 1, 3.
[13] Dharamadhat, T.; Thanasoontornlerk, K.; Kanongchaiyos, P.; , Tracking object in video
pictures based on background subtraction and image matching, Robotics and Biomimetics,
2008. ROBIO 2008. IEEE International Conference on , vol., no., pp.1255-1260, 22-25 Feb.
2009 doi: 10.1109/ROBIO.2009.4913180.
[14] Piccardi, M., Background subtraction techniques: a review, Systems, Man and Cybernetics, 2004 IEEE International Conference on, vol.4, no., pp. 3099- 3104 vol.4, 10-13 Oct.
2004.
[15] Andrews,A., Targeting multiple objects in real time, B.E, thesis, University of Calgary,
Canada, October,1999.
[16] Saravanakumar, S.; Vadivel, A.; Saneem Ahmed, C.G., Multiple human object tracking
using background subtraction and shadow removal techniques, Signal and Image Processing
(ICSIP), 2010 International Conference on, vol., no., pp.79-84, 15-17 Dec. 2010.
[17] ZuWhan. K., Real time object tracking based on dynamic feature grouping with background subtraction, Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE
Conference on , vol., no., pp.1-8, 23-28 June 2008 doi: 10.1109/CVPR.2008.4587551
[18] Isard, M.; MacCormick, J.; , BraMBLe: a Bayesian multiple-blob tracker, Computer
Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on , vol.2,
no., pp.34-41 vol.2, 2001 doi: 10.1109/ICCV.2001.937594
[19] Gonzalez, R.C., Woods, R.E., Digital Image Processing, Second Edition, Prentice Hall,
2002.
[20] Haralick, R.M., Shapiro, L.G., Computer and Robot Vision, Volume I, Addison-Wesley, 1992, pp. 28-48.
[21] Castagno, R., Ebrahimi, T., Kunt, M.,Video Segmentation Based on Multiple Features
for Interactive Multimedia Applications, IEEE Transactions on Circuits and Systems for
Video Technology, Vol. 8, No. 5, pp.562-571, September 1998.
[22] Kaneko, T., Hori, O., Feature selection for reliable tracking using template matching, in
Proc. IEEE Intl. Conference on Computer Vision and Pattern Recognition (CVPR 03), vol.
1, June 2003, pp. 796-802.
[23] Bochem, A.; Herpers, R.; Kent, K.B.; ,Hardware Acceleration of BLOB Detection
for Image Processing, Advances in Circuits, Electronics and Micro-Electronics (CENICS), 2010 Third International Conference on , vol., no., pp.28-33, 18-25 July 2010 doi:
10.1109/CENICS.2010.12.
[24] Mostafa, A.; Mehdi, A.; Mohammad, H.; Ahmad, A., Object Tracking in Video Sequence Using Background Modeling, Australian Journal of Basic and Applied Sciences,
5(5): 967-974, 2011.
[25] Babu, R.V.; Makur, A., Object-based Surveillance Video Compression using Foreground Motion Compensation, Control, Automation, Robotics and Vision, 2006. ICARCV
06. 9th International Conference on, pp. 1-6, 5-8 Dec. 2006.
[26] Comaniciu, D.; Ramesh, V.; Meer, P., Real-time tracking of non-rigid objects using
mean shift, Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on, vol. 2, pp. 142-149, 2000.
[27] G.L. Foresti, A real-time system for video surveillance of unattended outdoor environments, IEEE Trans. Circuits and Systems for Video Technology, Vol. 8, No. 6, pp. 697-704, 1998.
[28] G.L. Foresti, Moving Target Detection and Classification from Real-Time Video, In
Proceedings of IEEE Workshop on Application of Computer Vision, 1998.
[29] Elbadri, M.; Peterkin, R.; Groza, V.; Ionescu, D.; El Saddik, A., Hardware support
of JPEG, Electrical and Computer Engineering, 2005. Canadian Conference on, pp. 812-815,
1-4 May 2005.
[30] Deng, M.; Guan, Q.; Xu, S., Intelligent video target tracking system based on DSP,
Computational Problem-Solving (ICCP), 2010 International Conference on, pp. 366-369,
3-5 Dec. 2010.
[31] Liping, K.; Zhefeng, Z.; Gang, X., The Hardware Design of Dual-Mode Wireless Video
Surveillance System Based on DM6437, Networks Security, Wireless Communications
and Trusted Computing (NSWCTC), 2010 Second International Conference on, vol. 1,
pp. 546-549, 24-25 April 2010.
[32] Pescador, F.; Maturana, G.; Garrido, M.J.; Juarez, E.; Sanz, C., An H.264 video decoder
based on a DM6437 DSP, Consumer Electronics, 2009. ICCE 09. Digest of Technical
Papers International Conference on, pp. 1-2, 10-14 Jan. 2009.
[33] Wang, Q.; Guan, Q.; Xu, S.; Tan, F., A network intelligent video analysis system based
on multimedia DSP, Communications, Circuits and Systems (ICCCAS), 2010 International
Conference on, pp. 363-367, 28-30 July 2010.
[34] Kim, C., Hwang, J.N., Object-based video abstraction for video surveillance system,
IEEE Trans. Circuits and Systems for Video Technology, vol. 12, no. 12, pp. 1128-1138,
Dec. 2002.
[35] Nishi, T., Fujiyoshi, H., Object-based video coding using pixel state analysis, in IEEE
Intl. Conference on Pattern Recognition, 2004.
[36] William K. Pratt, Digital Image Processing (second edition), John Wiley & Sons, New
York, 1991, ISBN 0-471-85766-1.
[37] Wallace, G.K., The JPEG still picture compression standard, Consumer Electronics,
IEEE Transactions on, vol. 38, no. 1, pp. xviii-xxxiv, Feb 1992.
[38] Seol, S.W., et al., An automatic detection and tracking system of moving objects using
double differential based motion estimation, Proc. of Int. Tech. Conf. Circ./Syst., Comput.
and Comms. (ITC-CSCC2003), pp. 260-263, 2003.
[39] Dwivedi, V., JPEG Image Compression and Decompression with Modeling of DCT Coefficients on the Texas Instruments Video Processing Board TMS320DM6437, Master of Science, California State University, Sacramento, Summer 2010.
[40] Kapadia, P.,Car License Plate Recognition Using Template Matching Algorithm, Master Project Report, California State University, Sacramento,Fall 2010.
[41] Gohil, N. , Car License Plate Detection, Masters Project Report, California State University, Sacramento, Fall 2010.
[42] Texas Instruments Inc., TMS320DM6437 DVDP Getting Started Guide, Texas, July
2007.
[43] Texas Instruments Inc., TMS320DM6437 Digital Media Processor, Texas, pp. 1-5, 211-234, June 2008.
[44] Texas Instruments Inc., TMS320C64x+ DSP Cache User's Guide, Literature Number: SPRU862A, Table 1-6, Page 23, October 2006.
[45] Texas Instrument Inc., TMS320DM643x DMP Peripherals Overview Reference Guide,
pp 15-17,June 2007.
[46] Texas Instrument Inc., TMS320C6000 Programmers Guide, Texas, pp 37-84,March
2000.
[47] Xilinx Inc., The Xilinx LogiCORE IP RGB to YCrCb Color-Space Converter, pp. 1-5, July 2010.
[48] Texas Instruments Inc., How to Use the VPBE and VPFE Driver on TMS320DM643x.
Dallas, Texas, November 2007.
[49] Texas Instrument Inc., TMS320C64X+ DSP Cache, User Guide, pp 14-26, February
2009.
[50] Texas Instruments technical Reference, TMS320DM6437 Evaluation Module, Spectrum Digital , 2006.
[51] Keith Jack, Video Demystified: A Handbook for the Digital Engineer, 4th Edition.
[52] B.I. (Raj) Pawate, Developing Embedded Software using DaVinci & OMAP Technology.
[53] Al Bovik, Department of Electrical and Computer Engineering, UTA Texas, Handbook
of Image & Video Processing, Academic Press Series, 1999.
[54] Bonnie L. Stephens, Student Thesis on Image Compression Algorithms, California State
University, Sacramento, August 1996
[55] Berkeley Design Technology, Inc.,The Evolution of DSP Processors, World Wide Web,
http://www.bdti.com/articles/evolution.pdf, Nov. 2006.
[56] Berkeley Design Technology, Inc., Choosing a Processor: Benchmark and Beyond,
World Wide Web,http://www.bdti.com/articles/20060301_TIDC_Choosing.pdf, Nov. 2006.
[57] University of Rochester, DSP Architectures: Past, Present and Future, World Wide
Web, http://www.ece.rochester.edu/research/wcng/papers/CAN_r1.pdf, Nov. 2006.
[58] Steven W. Smith, The Scientist and Engineers Guide to Digital Signal Processing,
Second Edition, California Technical Publishing, 1999.
[59] Texas Instruments Inc., TMS320DM642 Technical Overview, Dallas, Texas, September 2002.
Acknowledgments
I express my sincere thanks and deep sense of gratitude to my supervisor, Prof. V. Rajbabu, for
his invaluable guidance, inspiration, unremitting support, encouragement, and stimulating
suggestions during the preparation of this report. His persistence and inspiration during the
ups and downs of research, and his clarity and focus during the uncertainties, have been very
helpful to me. Without his continuous encouragement and motivation, the present work would
not have seen the light of day.
I thank all the EI lab and TI-DSP lab members at IIT Bombay who have directly or indirectly
helped me throughout my stay at IIT. I would also like to thank the department staff, central
library staff and computer facility staff for their assistance.
I would like to express my sincere thanks to Mr. Ajay Nandoriya and Mr. K. S. Nataraj for
their help and support during the project work.
My family has, of course, been a source of faith and moral strength. I acknowledge the
shower of blessings and love of my parents, Mr. Rajiba Lochana Patro and Mrs. Uma Rani
Patro, and also of Godaborish Patro and Madhu Sundan Patro, for their unrelenting moral
support in difficult times. I wish to express my deep gratitude to all of my friends and
colleagues for their constant moral support, which made my stay at the institute pleasant. I
have enjoyed every moment that I spent with all of you.
And finally, I am thankful to God, in whom I trust.
Date: