
Reference Independent Moving Object Detection: An Edge Segment Based Approach

M. Ali Akber Dewan, M. Julius Hossain, and Oksam Chae*

Department of Computer Engineering, Kyung Hee University,
1 Seochun-ri, Kiheung-eup, Yongin-si, Kyunggi-do, South Korea, 449-701
dewankhu@gmail.com, mdjulius@yahoo.com, oschae@khu.ac.kr

Abstract. Updating the reference background to adapt to the dynamism of the environment is one of the most challenging tasks in moving object detection for video surveillance. Many background modeling techniques have been proposed; however, most of them suffer from high computational cost and from difficulties in determining the appropriate locations and pixel values for updating the background. In this paper, we present a new algorithm that utilizes the three most recent successive frames to isolate moving edges for moving object detection. It does not require any background model and is therefore computationally fast and applicable to real time processing. We also introduce a segment based representation of edges instead of the traditional pixel based representation, which makes it possible to incorporate an efficient edge matching algorithm that addresses the edge localization problem. The method is robust against random noise, illumination variation and quantization error. Experimental results of the proposed method are included and compared with some standard methods frequently used in video surveillance.

Keywords: Video surveillance, reference independent, chamfer matching, distance image, motion detection.

1 Introduction
Automatic detection of moving objects is a challenging and essential task in video surveillance. It has applications in diverse disciplines such as automatic video monitoring, intelligent transportation and airport security systems. Detailed reviews of moving object detection algorithms can be found in [1] and [2]. Background subtraction based methods are the most common approaches used for moving object detection. In these methods, background modeling is an important and unavoidable part, needed to accumulate illumination and other changes in the background scene for proper detection [3]. However, most background modeling methods are computationally complex and too time consuming for real time processing [4]. Moreover, they often suffer from poor performance due to a lack of compensation for the dynamism of the background scene [5].
Edge based methods are robust against illumination change. In [6] and [7], edge based methods are proposed for moving object detection which utilize double edge

* Corresponding author.

B. Apolloni et al. (Eds.): KES 2007/WIRN 2007, Part I, LNAI 4692, pp. 501–509, 2007.
© Springer-Verlag Berlin Heidelberg 2007
maps. In [6], one edge map is generated from the difference image of the background and the current frame, In; another edge map is generated from the difference image of In and In+1. Finally, moving edge points are detected by applying a logical OR operation on these two edge maps. However, due to illumination change and random noise in the background scene, false edges may appear in the first edge map and hence cause false detections in the final result. In [7], the first edge map is computed from the difference image of In-1 and In, and similarly the second map is obtained from In and In+1. Moving edges of In are then extracted by applying a logical AND operation on the two edge maps. However, because of noise and illumination change, the edge pixels of one edge map may be slightly displaced compared with the other. Exact matching through the AND operation therefore extracts scattered edge pixels, which fail to represent a reliable shape of the moving objects. Moreover, pixel based processing for moving edge detection is computationally expensive. A pseudo-gradient based moving edge extraction method is proposed in [8]. Though this method is computationally fast, its background is not updated to handle the situation in which a moving object stops in the scene; the stopped object is then continuously detected as moving. As no background update method is adopted, it is also not very robust against illumination change. Additionally, this method also suffers from scattered edge pixels of moving objects.

(a) (b) (c) (d)

Fig. 1. Difference between pixel based and segment based matching. (a) Edge image at time t;
(b) Edge image of same scene at time t+1; (c) Result obtained by pixel based matching; (d)
Result obtained by segment based matching.

Considering the above-mentioned problems, we present an edge segment based approach which utilizes three successive frames for moving object detection. In our proposed method, two difference image edge maps computed from three successive frames are used to extract moving edges, instead of an edge differencing approach. This makes the system robust against random noise as well as illumination variation. Since the proposed method does not require any background model, it is computationally fast and efficient. Moreover, the use of the most recent frames, which embody up-to-date information, helps to reduce false detection effectively. In our proposed method, the difference image edge maps are represented as segments instead of pixels using an efficiently designed edge class [9]. An edge segment consists of a number of consecutive edge pixels. This novel representation makes it possible to base decisions during matching, or any other operation, on an entire edge segment rather than an individual pixel. It provides the following benefits:

a) It makes it possible to incorporate an efficient and flexible edge matching algorithm [10] in our proposed method, which reduces the computation time significantly.
b) It allows our method to decide about a complete edge segment at a time, instead of an individual edge pixel, when keeping or discarding it from the edge list during matching. Fig. 1 illustrates the advantages of segment based matching over pixel based matching. Here, pixel based matching missed 20% of the edge pixels due to variation of edge localization in different frames. Segment based matching does not suffer from this problem, as it considers all the points of a segment together. As a result, it reduces the occurrence of scattered edge pixels in the detection result.
Since moving object segmentation is a problem separate from detection in video surveillance, we have not considered it in our proposed method. However, because of the segment based representation of edges, our proposed method is able to extract reliable shape information of moving objects. By incorporating this shape information into an image segmentation algorithm, it is possible to segment out moving objects from the current image efficiently. Segment based representation also makes it possible to attach knowledge to edge segments, which can facilitate higher level processing in video surveillance such as tracking, recognition and human activity recognition.

2 Description of the Proposed Method


The overall procedure of the proposed method is illustrated in Fig. 2. A detailed description of our method is given in the following subsections.
In-1, In, In+1
Dn-1 = In − In-1        Dn = In+1 − In
DEn-1 = φ(∇G * Dn-1)    DEn = φ(∇G * Dn)

Fig. 2. Flow diagram of the proposed method

2.1 Computation of Difference Image Edge Maps

A simple edge differencing approach suffers greatly from random noise. This is due to the fact that the noise appearing in one frame differs from that in its successive frames, which shifts edge locations to some extent between successive frames. Hence, instead of a simple edge differencing approach, we utilize difference images for moving edge detection. Edges extracted from a difference image are noise robust and comparatively stable, and hence partially solve the edge localization problem. Two difference image edge maps are utilized in our proposed method for moving object detection. To compute them, we first compute two difference images, Dn-1 and Dn, from three successive frames In-1, In, and In+1 as follows:
Dn-1 = In − In-1,   Dn = In+1 − In                                    (1)
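The computation of the two difference images and their edge maps (Eq. 1 and Fig. 2) can be sketched as follows. This is a minimal NumPy sketch under our own naming: absolute differences are taken so the result fits an unsigned image, and a thresholded gradient magnitude stands in for the Canny detector [11] used in the actual method.

```python
import numpy as np

def difference_image(a, b):
    """Absolute difference of two grayscale frames (cf. Eq. 1)."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).astype(np.uint8)

def edge_map(img, thresh=40):
    """Binary edge map of a difference image. A thresholded gradient
    magnitude is used here as a stand-in for the Canny detector."""
    g = img.astype(np.float64)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # central differences, x
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # central differences, y
    return np.hypot(gx, gy) > thresh

def difference_edge_maps(i_prev, i_cur, i_next, thresh=40):
    """Three successive frames -> two difference image edge maps."""
    d_prev = difference_image(i_cur, i_prev)   # D_{n-1}
    d_cur = difference_image(i_next, i_cur)    # D_n
    return edge_map(d_prev, thresh), edge_map(d_cur, thresh)
```

With a static background and an object appearing in the middle frame, the first edge map outlines the object while the second stays empty, which matches the intent of Eq. (1).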

After computing Dn-1 and Dn, the Canny edge detection algorithm [11] is applied to generate the difference image edge maps DEn-1 and DEn, respectively. In these edge maps, edge pixels are grouped together and represented as segments using an efficiently designed edge class [9]. To make the edge segments more useful for the moving edge detection procedure, we maintain the following constraints during edge segment generation:
a) If an edge segment contains multiple branches, the branches are broken into multiple edge segments at the branching point.
b) If an edge segment bends more than a certain limit at an edge point, it is broken into two edge segments at that position.
c) If the length of an edge segment exceeds a certain limit, it is divided into a number of smaller edge segments of the permitted length.
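Constraints (b) and (c) above can be sketched as a splitting pass over an ordered chain of edge pixels. This is an illustrative sketch only: the function name and thresholds are our own, and branch splitting (constraint (a)) is assumed to have been done during edge tracing.

```python
import numpy as np

def split_segment(points, max_len=30, max_bend_deg=60, step=3):
    """Split an ordered chain of edge pixels per constraints (b) and (c):
    break where the chain bends sharply, then cap segment length."""
    pts = np.asarray(points, dtype=float)
    breaks = [0]
    for i in range(step, len(pts) - step):
        v1 = pts[i] - pts[i - step]          # incoming direction
        v2 = pts[i + step] - pts[i]          # outgoing direction
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > max_bend_deg:
            if i - breaks[-1] >= step:       # avoid near-empty pieces
                breaks.append(i)
    breaks.append(len(pts))
    segments = []
    for s, e in zip(breaks[:-1], breaks[1:]):
        for k in range(s, e, max_len):       # enforce the length cap (c)
            segments.append(points[k:min(k + max_len, e)])
    return segments
```

A right-angle chain splits near the corner, and a long straight chain splits into pieces no longer than max_len, mirroring constraints (b) and (c).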
Segment based representation helps the proposed system to use the geometric shape of edges during matching for moving edge detection. It also helps to extract solid edge segments of moving objects instead of scattered or very small edges. No edge pixel is processed independently; rather, all the edge pixels of a segment are processed together during matching or any other operation. Fig. 3(d) shows the difference image edge map generated from Fig. 3(a) and Fig. 3(b); similarly, the edge map in Fig. 3(e) is obtained from Fig. 3(b) and Fig. 3(c).

(a) (b) (c) (d)

(e) (f) (g)

Fig. 3. DT image generation and matching. (a) In-1; (b) In; (c) In+1; (d) DEn-1; (e) DEn; (f) DT
image of DEn-1; (g) Edge matching using DT image. Here, Matching_confidence = 0.91287.

2.2 Moving Object Detection

Edge maps DEn-1 and DEn are used in this step to extract moving edges for moving object detection in the video sequence. DEn-1 contains the moving edges between In-1 and In, and DEn contains those between In and In+1. Thus, the moving edges of In are common to both edge maps. Therefore, to find the moving edges, we superimpose one edge map on the other and compute the matching between them. If two edge segments are almost similar in size and shape, and situated in almost the same positions in the two edge maps, they are considered moving edges of In. However, noise may slightly change these parameters as well. Hence, instead of exact matching, introducing some variability reduces the localization problem and yields better results. Considering these issues, we have adopted an efficient edge matching algorithm known as chamfer ¾ matching [10]. Following the chamfer matching procedure, a distance transform (DT) image is generated from one difference image edge map, and the edge segments of the other map are superimposed on it to compute a matching confidence. If the matching confidence of an edge segment is below a certain threshold, the segment is enlisted as a moving edge; this threshold provides the variability during matching. In our method, we utilize DEn-1 to generate the DT image, and thereafter the edge segments of DEn are superimposed on it to compute the matching confidence.
To compute the DT image, we use an integer approximation of the exact Euclidean distance to minimize computation time [10]. Each pixel in the DT image holds the distance to the nearest edge pixel in the edge map. A two-pass algorithm calculates the distance values sequentially: initially, the edge pixels are set to zero and all other positions are set to infinity. The first (forward) pass modifies the distance image as follows:
vi,j = min( vi-1,j-1 + 4,  vi-1,j + 3,  vi-1,j+1 + 4,  vi,j-1 + 3,  vi,j )          (2)

and thereafter, the second (backward) pass works as follows:

vi,j = min( vi,j,  vi,j+1 + 3,  vi+1,j-1 + 4,  vi+1,j + 3,  vi+1,j+1 + 4 )          (3)
where vi,j is the distance at pixel position (i, j). Fig. 3(f) illustrates a DT image computed from the difference image edge map shown in Fig. 3(d). In Fig. 3(f), the distance values are normalized to the range 0-255 for better visualization.
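The two-pass computation of Eqs. (2) and (3) can be sketched directly. This is a straightforward (unoptimized) sketch: a large integer stands in for infinity, and the boundary checks simply skip neighbors outside the image.

```python
import numpy as np

INF = 10**6  # stands in for infinity

def chamfer_dt(edge_map):
    """Two-pass chamfer 3/4 distance transform (Eqs. 2 and 3).
    Edge pixels start at 0, all other pixels at 'infinity'."""
    h, w = edge_map.shape
    v = np.where(edge_map, 0, INF).astype(np.int64)
    # Forward pass: top-left to bottom-right (Eq. 2)
    for i in range(h):
        for j in range(w):
            if i > 0:
                if j > 0:
                    v[i, j] = min(v[i, j], v[i-1, j-1] + 4)
                v[i, j] = min(v[i, j], v[i-1, j] + 3)
                if j < w - 1:
                    v[i, j] = min(v[i, j], v[i-1, j+1] + 4)
            if j > 0:
                v[i, j] = min(v[i, j], v[i, j-1] + 3)
    # Backward pass: bottom-right to top-left (Eq. 3)
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if j < w - 1:
                v[i, j] = min(v[i, j], v[i, j+1] + 3)
            if i < h - 1:
                if j > 0:
                    v[i, j] = min(v[i, j], v[i+1, j-1] + 4)
                v[i, j] = min(v[i, j], v[i+1, j] + 3)
                if j < w - 1:
                    v[i, j] = min(v[i, j], v[i+1, j+1] + 4)
    return v
```

For a single edge pixel, the transform yields 3 per horizontal/vertical step and 4 per diagonal step, the integer approximation of Euclidean distance described above.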
During matching, an edge segment of DEn is superimposed on the DT image of DEn-1 to accumulate the corresponding distance values. A normalized average of these values (root mean square) is the measure of matching confidence of the edge segment in DEn, as shown in the following equation:

Matching_confidence[l] = (1/3) · sqrt( (1/k) Σi=1..k {dist(li)}² )                  (4)

where k is the number of edge points in the lth edge segment of DEn, and dist(li) is the distance value at position i of edge segment l. The average is divided by 3 to compensate for the unit distance 3 in the chamfer ¾ distance transform. Edge segments whose matching confidence is comparatively high are removed from DEn. The existence of a similar edge segment in both DEn-1 and DEn produces a low Matching_confidence value for that segment. We allow some flexibility by introducing a disparity threshold τ; empirically, we set τ = 1.3 in our implementation. A match between edge segments is considered to occur if Matching_confidence[l] ≤ τ; the corresponding edge segment is then considered a moving edge and is added to the moving edge list. Finally, the resultant edge list, MEn, contains the edge segments that belong to moving objects in In. Fig. 3(g) illustrates the procedure of computing the matching confidence using the DT image.
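The confidence measure of Eq. (4) and the threshold test against τ can be sketched as follows. The function names are our own; the DT image is assumed to come from DEn-1 and the segments from DEn, as in the text.

```python
import numpy as np

def matching_confidence(segment, dt):
    """Eq. 4: RMS of the chamfer distances under a segment, divided
    by 3 to compensate for the unit distance of the 3/4 transform."""
    d = np.array([dt[i, j] for i, j in segment], dtype=float)
    return np.sqrt(np.mean(d ** 2)) / 3.0

def moving_edges(segments, dt, tau=1.3):
    """Keep segments of DE_n whose confidence is within the disparity
    threshold tau (a low value means a good match in DE_{n-1})."""
    return [s for s in segments if matching_confidence(s, dt) <= tau]
```

A segment lying on edge pixels of the DT image scores 0 and is kept; a segment far from any edge scores well above τ = 1.3 and is discarded.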

3 Experimental Results
Experiments have been carried out on several video sequences captured in indoor as well as outdoor environments to verify the effectiveness of the proposed method. We applied the proposed method to video of size 640x520, using an Intel Pentium IV 1.5 GHz processor and 512 MB of RAM. Visual C++ 6.0 and MTES [12] were used as our working tools for the implementation.

(a) (b) (c)

(d) (e) (f)

Fig. 4. (a) I150; (b) I151; (c) I152; (d) DE150; (e) DE151; (f) Detected moving edges of I151

Fig. 4 shows an experimental result for moving object detection in an outdoor environment. Three consecutive frames, I150, I151 and I152, shown in Fig. 4(a), Fig. 4(b) and Fig. 4(c), respectively, are used to compute two difference images, D150 and D151. Thereafter, the difference image edge maps DE150 and DE151, shown in Fig. 4(d) and Fig. 4(e), are computed from D150 and D151, respectively. Fig. 4(f) shows the detected moving edges of I151, which were common to both difference image edge maps.
Fig. 5 shows another experimental result, obtained with an indoor video sequence. Fig. 5(a), Fig. 5(b), Fig. 5(c) and Fig. 5(d) show the background and three successive frames I272, I273 and I274, respectively, under varying illumination conditions and quantization error. The result obtained with the method of Kim and Hwang [6] is shown in Fig. 5(e), where double edge maps have been utilized to detect moving edges. In their method, the difference between the background and the current frame incorporates most of the noise pixels. Fig. 5(f) shows the result of the method proposed by Dailey and Cathey [7]. This result is more robust against illumination changes, as the method uses the most recent successive frame differences for moving edge detection. However, it suffers from scattered edge pixels because it uses a logical AND operation on the difference image edge maps for matching. Illumination variation and quantization error induce an edge localization problem in the difference image edge maps; as a result, some portions of the same edge segment are matched and some are not, producing scattered edges in the final detection result. Our method does not experience this problem, because it applies flexible matching between difference image edge maps containing edge segments. The result obtained with our proposed method is shown in Fig. 5(g).

(a) (b) (c) (d)

(e) (f) (g)

Fig. 5. (a) Background; (b) I172; (c) I173; (d) I174; (e) Detected moving edges of I173 using Kim
and Hwang method; (f) Detected moving edges of I173 using Dailey and Cathey method; (g)
Detected moving edges of I173 using our proposed method

Table 1. Mean processing time (ms) for each module

Processing step                                                Mean time (ms)
Computation of difference images                                           5
Edge map generation from difference images                                39
DT image generation                                                       11
Computation of matching confidence and moving edge detection              19
Total time required                                                       74
To put the computational efficiency of the algorithm in perspective: with the processing power and the processing steps described above, the execution time for moving object detection on grayscale images was approximately 74 ms, i.e., a processing speed of around 13 frames per second. With the faster CPUs available today and in the future, this frame rate can be improved further. Table 1 lists the approximate times required to execute the different modules of the proposed method.

4 Conclusions and Future Work

This paper presents a robust method for moving object detection which does not require any background model. The representation of edges as segments helps to reduce the effect of noise and makes it possible to incorporate a fast and flexible edge matching method. The proposed method is therefore computationally efficient and suitable for real time automated video surveillance systems. Our method is robust against illumination changes, as it works on the most recent successive frames and utilizes edge information for moving object detection. However, the presented method is not very effective at detecting objects with very slow movement, as it uses three consecutive frames instead of a background model. The moving edge segments extracted by our proposed method represent accurate shape information of moving objects, and these segments can be utilized for moving object segmentation. We are currently pursuing moving object segmentation from moving edges using the watershed algorithm. As segment based representation provides shape information of moving objects, the proposed method can easily be extended to tracking, recognition and classification of moving objects. Experimental results and comparative studies with respect to some standard methods show that the proposed method is effective for the moving object detection problem.

References
1. Radke, R., Andra, S., Al-Kohafi, O., Roysam, B.: Image Change Detection Algorithms: A
Systematic Survey. IEEE Trans. on Image Processing 14(3), 294–307 (2005)
2. Kastrinaki, V., Zervakis, M., Kalaitzakis, K.: A Survey of Video Processing Techniques
for Traffic Applications. Image and Vision Computing 21(4), 359–381 (2003)
3. Chien, S.Y., Ma, S.Y., Chen, L.: Efficient Moving Object Segmentation Algorithm Using
Background Registration Technique. IEEE Transactions on Circuits and Systems for
Video Technology 12(7), 577–586 (2002)
4. Sappa, A.D., Dornaika, F.: An Edge-Based Approach to Motion Detection. In:
Alexandrov, V.N., van Albada, G.D., Sloot, P.M.A., Dongarra, J.J. (eds.) ICCS 2006.
LNCS, vol. 3991, pp. 563–570. Springer, Heidelberg (2006)
5. Gutchess, D., Trajkovics, M., Cohen-Solal, E., Lyons, D., Jain, A.K.: A Background
Model Initialization Algorithm for Video Surveillance. Proc. of IEEE Intl. Conf. on
Computer Vision 1, 733–740 (2001)
6. Kim, C., Hwang, J.N.: Fast and Automatic Video Object Segmentation and Tracking for
Content-based Applications. IEEE Trans. on Circuits and Systems for Video Tech. 12,
122–129 (2002)

7. Dailey, D.J., Cathey, F.W., Pumrin, S.: An Algorithm to Estimate Mean Traffic Speed
using Un-calibrated Cameras. IEEE Trans. on Intelligent Transportation Sys. 1(2), 98–107
(2000)
8. Makarov, A., Vesin, J.M., Kunt, M.: Intrusion Detection Using Extraction of Moving
Edges. In: Proc. of International Conf. on Pattern Recognition 1, 804–807 (1994)
9. Ahn, K.O., Hwang, H.J., Chae, O.S.: Design and Implementation of Edge Class for Image
Analysis Algorithm Development based on Standard Edge. In: Proc. of KISS Autumn
Conference, pp. 589–591 (2003)
10. Borgefors, G.: Hierarchical Chamfer Matching: A Parametric Edge Matching Algorithm.
IEEE Trans. on PAMI 10(6), 849–865 (1988)
11. Canny, J.: A Computational Approach to Edge Detection. IEEE Trans. on PAMI 8(6),
679–698 (1986)
12. Lee, J., Cho, Y.K., Heo, H., Chae, O.S.: MTES: Visual Programming for Teaching and
Research in Image Processing. In: Sunderam, V.S., van Albada, G.D., Sloot, P.M.A.,
Dongarra, J.J. (eds.) ICCS 2005. LNCS, vol. 3514, pp. 1035–1042. Springer, Heidelberg
(2005)
