
DAI at the MediaEval Visual Privacy Task

Dominique Maniry, Esra Acar, Sahin Albayrak


Competence Center Information Retrieval & Machine Learning

Outline

Introduction
The Foreground Edges Method
  Sample Outputs of the Method
  Objective & Subjective Evaluation
Improved Foreground Edges Method
Methods for the MediaEval 2014 Visual Privacy Task
  Privacy-level based Blurring
  Abstract Representation
Conclusions

20 June 2014

VideoSense Cluster Workshop

Introduction

The MediaEval Visual Privacy Task (VPT) aims at developing solutions that
ensure the privacy of people in videos is protected.
  Running since 2012 within the MediaEval workshop.
  Object detections were provided in the 2013 edition.
  The focus of the task is to make persons appearing in videos unrecognizable.
Evaluation is performed using the PEViD dataset, which
  consists of about 60 high-resolution video files with an average length of 20 seconds each, and
  contains both indoor and outdoor scenarios (including night-time videos).
The people shown in the videos perform various actions, such as exchanging objects, talking, fighting or simply walking by.


The Foreground Edges Method (1)

The face is NOT the only body part that can disclose the identity of an individual.

Main idea: replace whole bodies with silhouettes defined by moving edges.
  Based on motion edge detection.
  Foreground edges within a person's bounding box are set to green.


The Foreground Edges Method (2)

Apply horizontal and vertical Sobel masks (Ex(x,y) and Ey(x,y)) and quantize the edge results E(x,y) to one of three levels {0, 1, 2}.

Determine frame differences by comparing E(x, y, t) with background edge pixels B(x, y, t):
  0: no significant edges
  1: significant edges with the same sign
  2: significant edges with different signs
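A rough sketch of this quantization and comparison step, in pure NumPy, might look as follows. The Sobel kernel is standard, but the significance threshold T, the single gradient direction shown, and the exact class mapping are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

# 3x3 horizontal Sobel mask (standard); only one gradient
# direction is shown here for brevity.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
T = 50  # significance threshold (assumed value)

def filter2d(img, kernel):
    """Naive 'valid' 2-D correlation, enough for a 3x3 mask."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def quantize_edges(frame):
    """Quantize Sobel responses to {0, 1, 2}:
       0 = no significant edge, 1 = significant positive,
       2 = significant negative."""
    g = filter2d(frame.astype(float), SOBEL_X)
    q = np.zeros_like(g, dtype=np.uint8)
    q[g > T] = 1
    q[g < -T] = 2
    return q

def frame_difference(e, b):
    """Compare current edge classes E with background edge classes B:
       0 = no significant edges, 1 = significant with the same sign,
       2 = significant with different signs (foreground candidate)."""
    d = np.zeros_like(e)
    significant = (e > 0) | (b > 0)
    d[significant & (e == b)] = 1
    d[significant & (e != b)] = 2
    return d
```

A vertical step edge in a test frame produces class-1 pixels, and comparing against an empty background marks them as class 2, i.e. foreground candidates.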

The Foreground Edges Method (3)

Pixels with a frame difference value of 2 are considered foreground.

The background image is updated regularly after initialization.
  The time constant α, 0 < α < 1.0, controls how fast foreground pixels are reclassified as background.
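The update might be a running average; the formulation below and the value of α are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average update: B_t = (1 - alpha) * B_{t-1} + alpha * F_t.
    alpha in (0, 1) controls how fast foreground regions fade into
    the background model (alpha=0.05 is an illustrative value)."""
    return (1.0 - alpha) * background.astype(float) + alpha * frame.astype(float)
```

With a small α, a person who stops moving is only slowly absorbed into the background model, which keeps brief pauses from erasing the silhouette.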


Sample Outputs of the Method (1)

Dropping a bag.


Two persons fighting.


Sample Outputs of the Method (2)


Sample Outputs of the Method (3)


Sample Outputs of the Method (4)


(Objective) Performance Evaluation


(Subjective) Performance Evaluation


Improved Foreground Edges Method (1)


Aim: to improve foreground segmentation.
Main idea: determine moving edges by edge-based foreground segmentation as follows:

[Figure: the foreground edge segmentation process]



Improved Foreground Edges Method (2)

A long-term and a short-term background model are used.
  The short-term and long-term models have different learning rates.

The x and y gradients are modelled independently, and later combined using thresholding on foreground edges (as in Canny edge detection).

To control the level of detail in the constructed silhouettes, we employ adaptive thresholds in the method.
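A minimal sketch of the two-model idea follows. The learning rates, the thresholds, and the way the models are combined with Canny-style hysteresis are all assumptions for illustration, not the published method:

```python
import numpy as np

SHORT_ALPHA = 0.5   # short-term model adapts quickly (assumed rate)
LONG_ALPHA = 0.01   # long-term model adapts slowly (assumed rate)
HIGH_T, LOW_T = 60, 20  # hysteresis thresholds, as in Canny (illustrative)

def dilate(mask):
    """3x3 binary dilation without external dependencies."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

class DualBackground:
    """Sketch of a two-model background: a pixel counts as foreground
    only if it deviates from both the short- and long-term models."""
    def __init__(self, first_frame):
        self.short = first_frame.astype(float)
        self.long = first_frame.astype(float)

    def step(self, frame):
        f = frame.astype(float)
        # deviation from the *closer* of the two models
        diff = np.minimum(np.abs(f - self.short), np.abs(f - self.long))
        strong = diff > HIGH_T
        weak = diff > LOW_T
        # Canny-style hysteresis: keep weak pixels touching strong ones
        fg = strong | (weak & dilate(strong))
        self.short = (1 - SHORT_ALPHA) * self.short + SHORT_ALPHA * f
        self.long = (1 - LONG_ALPHA) * self.long + LONG_ALPHA * f
        return fg
```

Because the slowly adapting long-term model still remembers the empty scene, a region that briefly changed and reverted does not leave a "ghost" foreground blob, which is one way false positives get reduced.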

Improved Foreground Edges Method (3)

With the improved method,
  cleaner silhouettes are obtained, and
  false positives are reduced.

[Figure: an example output of the improved privacy filter]


Methods for the MediaEval 2014 Visual Privacy Task

The 2014 VPT puts the emphasis on the human point of view on privacy:
only human viewers can determine whether privacy is protected or not.
The task gathers different perspectives on what effective privacy protection should feature:
  the general public from online communities,
  video surveillance staff (trained CCTV monitoring professionals), and
  a focus group comprising developers of video-analytics technology and privacy protection solutions.


Privacy-level based Blurring

Privacy-level based Blurring consists of three steps:
  Step 1: blur according to the privacy annotation
  Step 2: reduce the number of colors
  Step 3: remap colors
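The three steps could be sketched as below. The box blur, per-channel quantization, and palette lookup are simplified stand-ins for the actual filter, and every parameter value is an illustrative assumption:

```python
import numpy as np

def box_blur(img, k):
    """Step 1: simple separable box blur; the kernel size k would grow
    with the privacy level of the annotated region."""
    out = img.astype(float)
    kernel = np.ones(k) / k
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, out)
    return out

def reduce_colors(img, n_levels):
    """Step 2: quantize intensities to n_levels bins (bin centers)."""
    step = 256 / n_levels
    return (np.floor(img / step) * step + step / 2).astype(np.uint8)

def remap_colors(img, palette):
    """Step 3: map each quantized value to an entry of a (hypothetical)
    display palette."""
    idx = (img.astype(int) * (len(palette) - 1)) // 255
    return palette[idx]
```

Running the steps in sequence on a region first destroys fine identity detail (blur), then flattens remaining texture (color reduction), and finally decouples the displayed colors from the original ones (remapping).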


Step 1: Blur


Step 2: Reduce Colors


Step 3: Remap Colors


Sample Outputs of the Method (1)


Sample Outputs of the Method (2)


Sample Outputs of the Method (3)


Discussion on Privacy-level based Blurring


Pros
  Parameters to tune the trade-off between privacy and intelligibility (blur intensity and number of colors).
  Different regions can have different privacy levels by using different blur intensities (e.g. the face blurred more than the full body).
  Simple.

Cons
  Identity-related details can leak through shape.
  Remapped colors can convey additional information.


Abstract Representation
This year's annotations simulate perfect action recognition.
Idea: completely replace persons with an abstract representation and display actions using color and overlays.
Relevant objects are re-rendered on a background model and annotated if necessary.


Unusual Events
Person blobs change to red when an unusual event occurs.
The action is annotated (fighting, stealing, or dropping a bag).
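The rendering could be sketched as painting a flat-colored blob over the background model, switching to red on unusual events. The color scheme and function names here are hypothetical:

```python
import numpy as np

NORMAL = np.array([0, 180, 0], dtype=np.uint8)   # green blob (assumed scheme)
ALERT = np.array([200, 0, 0], dtype=np.uint8)    # red for unusual events

def render_abstract(background, person_mask, unusual=False):
    """Replace a person's pixels with a flat-colored blob drawn over the
    background model; the color switches to red for unusual events."""
    out = background.copy()
    out[person_mask] = ALERT if unusual else NORMAL
    return out
```

Since the blob carries no texture at all, nothing of the person's appearance survives; only position, extent, and the event color remain visible.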


Discussion on Abstract Representation


Pros
  Maximum privacy.

Cons
  The person representation can be unintuitive.
  Needs a background model.
  Would need a fallback in a real system, based on the confidence of action recognition.


Conclusions

The user study shows that the basic foreground edges filter is able to provide privacy while maintaining intelligibility.

We initialize the background using the first frame of a video.
  The first frame of a video might already contain an individual.

The improved foreground edge filter led to cleaner silhouettes and reduced false positives.


Thanks!

Esra Acar, M.Sc.
Researcher
Competence Center Information Retrieval & Machine Learning
DAI-Labor
Technische Universität Berlin
Fakultät IV Elektrotechnik & Informatik

esra.acar@dai-labor.de
Phone +49 (0) 30 / 314 74 013
Fax +49 (0) 30 / 314 74 003

Sekretariat TEL 14
Ernst-Reuter-Platz 7
10587 Berlin, Germany

www.dai-labor.de
