
BEWARE

School of Electronic Engineering and Computer Science, Queen Mary, University of London, London, England, E1 4NS

Behaviour based Enhancement of Wide-Area Situational Awareness in a Distributed Network of CCTV Cameras

BEWARE@dcs.qmul.ac.uk http://www.dcs.qmul.ac.uk/research/vision/projects/BEWARE/

Introduction

BEWARE is a project funded by EPSRC and MOD to develop models for video-based people tagging and behaviour monitoring across a distributed network of CCTV cameras for the enhancement of global situational awareness in a wide area.

Camera Network

We have installed a modern IP camera network to develop and test our methods in realistic environments. The network comprises 4 indoor and 4 outdoor cameras.

Tracking

Obtain tracks of people using Bayesian statistical techniques and build models of activity within a camera. METHOD: We use Rao-Blackwellised particle filters, which combine Kalman filters with particle filters. RESULTS: Experiments were performed on several datasets, including sequences from our own camera network.
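As an illustration of the Rao-Blackwellised idea, the sketch below tracks a single target by letting a particle filter handle a sampled discrete motion mode while a per-particle Kalman filter handles the conditionally linear-Gaussian position/velocity state. The constant-velocity model, the two-mode noise switch, and all parameter values are assumptions made for the example, not the settings used in the project.

```python
# Minimal sketch of a Rao-Blackwellised particle filter for single-target tracking.
# The discrete "motion mode" (smooth vs. erratic) is sampled per particle; the
# linear-Gaussian state [x, y, vx, vy] is updated analytically by a Kalman filter.
import numpy as np

rng = np.random.default_rng(0)

dt = 1.0
F = np.array([[1, 0, dt, 0],   # constant-velocity state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we observe position only
              [0, 1, 0, 0]], dtype=float)
R = np.eye(2) * 4.0            # measurement noise covariance
Q_modes = {0: np.eye(4) * 0.1, # mode 0: smooth motion
           1: np.eye(4) * 2.0} # mode 1: erratic motion

N = 100                        # number of particles
modes = rng.integers(0, 2, size=N)
means = np.zeros((N, 4))       # Kalman mean per particle
covs = np.tile(np.eye(4) * 10.0, (N, 1, 1))
weights = np.full(N, 1.0 / N)

def step(z):
    """One RBPF update with a position measurement z = [x, y]."""
    global modes, means, covs, weights
    for i in range(N):
        # Particle part: sample the discrete motion mode (simple Markov switch).
        if rng.random() < 0.1:
            modes[i] = 1 - modes[i]
        Q = Q_modes[int(modes[i])]
        # Kalman prediction conditional on the sampled mode.
        m = F @ means[i]
        P = F @ covs[i] @ F.T + Q
        # Kalman update; weight the particle by the innovation likelihood.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        innov = z - H @ m
        means[i] = m + K @ innov
        covs[i] = (np.eye(4) - K @ H) @ P
        lik = np.exp(-0.5 * innov @ np.linalg.solve(S, innov)) / np.sqrt(np.linalg.det(2 * np.pi * S))
        weights[i] *= lik + 1e-300
    weights /= weights.sum()
    # Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        modes, means, covs = modes[idx], means[idx].copy(), covs[idx].copy()
        weights = np.full(N, 1.0 / N)
    return weights @ means     # posterior mean state estimate

# Example: feed a few noisy position measurements.
for z in ([10.0, 5.0], [11.0, 5.5], [12.1, 6.1]):
    print(step(np.asarray(z)))
```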

Multi-Camera Object Tagging

Person association between camera views under varying lighting conditions. METHOD: By modelling the inter-camera foreground illumination changes as brightness mapping functions, observations can be associated between cameras. Extending this to model the background illumination changes allows us to infer new brightness mappings as the illumination changes over time. RESULTS: Our method outperforms existing illumination-based inter-camera methods due to its ability to model illumination change over time.
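To make the brightness-mapping idea concrete, the sketch below estimates a brightness transfer function between two views by cumulative-histogram matching and uses it before comparing appearance histograms. This is a generic illustration of brightness-mapping-based association on toy data; it does not implement the project's time-adaptive model, and all function names and values are assumptions for the example.

```python
# Minimal sketch: estimate a brightness transfer function (BTF) between two
# camera views from corresponding foreground pixels, then compare appearance
# histograms after mapping through the BTF.
import numpy as np

def cdf(pixels, bins=256):
    """Normalised cumulative histogram of 8-bit intensities."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 256))
    c = np.cumsum(hist).astype(float)
    return c / c[-1]

def estimate_btf(pixels_a, pixels_b, bins=256):
    """Map each brightness level in camera A to the level in camera B
    with the same cumulative probability (histogram matching)."""
    ca, cb = cdf(pixels_a, bins), cdf(pixels_b, bins)
    return np.searchsorted(cb, ca, side="left").clip(0, bins - 1)

def histogram_similarity(pixels_1, pixels_2, bins=32):
    """Bhattacharyya coefficient between two normalised intensity histograms."""
    h1, _ = np.histogram(pixels_1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(pixels_2, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.sum(np.sqrt(h1 * h2)))

# Toy example: camera B is roughly a darker, lower-contrast view of camera A.
rng = np.random.default_rng(1)
person_in_a = rng.normal(140, 25, 5000).clip(0, 255)
person_in_b = (0.6 * person_in_a + 10).clip(0, 255)   # same person seen in camera B
btf = estimate_btf(person_in_a.astype(int), person_in_b.astype(int))

# Associate a new observation in camera B by first mapping the stored
# camera-A appearance through the BTF, then comparing histograms.
mapped_a = btf[person_in_a.astype(int)]
print("similarity after mapping:", histogram_similarity(mapped_a, person_in_b))
print("similarity without mapping:", histogram_similarity(person_in_a, person_in_b))
```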

Behaviour Correlation Discovery

This work aims to model and distinguish global behaviour correlations among multiple objects for detecting anomalies. METHOD: The scene is decomposed into local regions to summarise local behaviours; behaviour correlations are then modelled by studying different types of behaviour co-occurrence. RESULTS: The model automatically recognises temporal phases at a traffic junction and detects a fire engine going through the junction, breaking the normal traffic patterns.
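The sketch below illustrates one simple form of region-level co-occurrence modelling: the view is divided into a grid of local regions, each clip is summarised by a discrete behaviour label per region, and pairwise label co-occurrence tables learned from training clips are used to score how unusual a new clip is. The grid size, label quantisation, and scoring are assumptions for the example, not the project's actual model.

```python
# Minimal sketch of region-level behaviour co-occurrence modelling for anomaly
# detection: learn pairwise co-occurrence tables over per-region behaviour labels,
# then flag clips whose joint labels are improbable under the learned model.
import numpy as np
from itertools import combinations

N_REGIONS = 16     # e.g. a 4x4 grid over the camera view
N_LABELS = 3       # per-region behaviour labels: 0=idle, 1=light, 2=busy

def cooccurrence_model(training_labels):
    """training_labels: (n_clips, N_REGIONS) array of per-region labels.
    Returns pairwise co-occurrence probability tables with Laplace smoothing."""
    model = {}
    for i, j in combinations(range(N_REGIONS), 2):
        counts = np.ones((N_LABELS, N_LABELS))        # Laplace smoothing
        for row in training_labels:
            counts[row[i], row[j]] += 1
        model[(i, j)] = counts / counts.sum()
    return model

def clip_score(model, clip_labels):
    """Mean log-probability of all pairwise label co-occurrences in one clip;
    low scores indicate behaviour combinations rarely seen in training."""
    logp = [np.log(model[(i, j)][clip_labels[i], clip_labels[j]])
            for i, j in combinations(range(N_REGIONS), 2)]
    return float(np.mean(logp))

# Toy training data: regions tend to be busy together (correlated activity phases).
rng = np.random.default_rng(2)
phase = rng.integers(0, N_LABELS, size=(500, 1))
train = np.clip(phase + rng.integers(-1, 2, size=(500, N_REGIONS)), 0, N_LABELS - 1)
model = cooccurrence_model(train)

normal_clip = np.full(N_REGIONS, 2)                   # everything busy together
odd_clip = np.array([2, 0, 2, 0] * 4)                 # uncorrelated mixture of labels
print("normal:", clip_score(model, normal_clip))
print("anomalous:", clip_score(model, odd_clip))
```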

Human Action Recognition

Automatically recognise and classify human actions from CCTV videos. METHOD: For each training video we extract spatio-temporal features using interest-point detection, then train our classifier on the obtained features. Finally, given an unknown video sequence, we can detect the performed action with 93% accuracy. RESULTS: This method has been tested on standard datasets (KTH and WEIZMANN), outperforming state-of-the-art approaches.
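A common way to turn spatio-temporal interest-point features into an action classifier is a bag-of-features pipeline, sketched below: descriptors from each video are quantised against a learned vocabulary, each video becomes a word histogram, and a classifier is trained on the histograms. The descriptor extractor here is a stand-in (small cuboids around high-temporal-variance pixels), and the SVM choice and all parameters are assumptions for the example; this toy code does not reproduce the project's detector or its reported accuracy.

```python
# Minimal sketch of a bag-of-features action classification pipeline:
# quantise per-video spatio-temporal descriptors against a k-means vocabulary,
# represent each video as a word histogram, and classify with an SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

VOCAB_SIZE = 50

def extract_descriptors(video):
    """video: (frames, height, width) array. Returns (n_points, dim) descriptors.
    Stand-in detector: describe small cuboids sampled where temporal variance is high."""
    var = video.var(axis=0)
    ys, xs = np.unravel_index(np.argsort(var, axis=None)[-30:], var.shape)
    descs = []
    for y, x in zip(ys, xs):
        y0, x0 = max(y - 2, 0), max(x - 2, 0)
        cuboid = video[:, y0:y0 + 5, x0:x0 + 5]
        descs.append(np.resize(cuboid.mean(axis=0).ravel(), 25))
    return np.array(descs)

def bag_of_words(descriptors, vocab):
    """Quantise descriptors and return a normalised word histogram."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=VOCAB_SIZE).astype(float)
    return hist / hist.sum()

# Toy data: two "action classes" with different synthetic motion statistics.
rng = np.random.default_rng(3)
videos = [rng.normal(c, 1.0, size=(20, 32, 32)) + c * np.sin(np.arange(20))[:, None, None]
          for c in (0, 1) for _ in range(10)]
labels = [c for c in (0, 1) for _ in range(10)]

all_descs = np.vstack([extract_descriptors(v) for v in videos])
vocab = KMeans(n_clusters=VOCAB_SIZE, n_init=3, random_state=0).fit(all_descs)
X = np.array([bag_of_words(extract_descriptors(v), vocab) for v in videos])
clf = SVC(kernel="linear").fit(X, labels)

# Classify an "unknown" clip drawn from class 1's distribution.
test = rng.normal(1, 1.0, size=(20, 32, 32)) + np.sin(np.arange(20))[:, None, None]
print("predicted action class:", clf.predict([bag_of_words(extract_descriptors(test), vocab)]))
```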

Research Open Day 2009