
5/13/2008

Admin
Assignment 3 due
Last lecture - move to Friday?
Projects - come and see me

Deblurring & Deconvolution
Lecture 10

Different types of blur


Camera shake
User moving hands

Scene motion
Objects in the scene moving

Defocus blur [NEXT WEEK]


Depth of field effects


Overview

Removing Camera Shake
Non-blind
Blind

Removing Motion Blur
Non-blind
Blind

Focus on software approaches

Let's take a photo
Blurry result

Slow-motion replay
Slow-motion replay - motion of camera

Image formation model: Convolution

Blurry image = Sharp image ⊗ Blur kernel
Blurry image: input to algorithm
Sharp image: desired output
⊗: convolution operator

Model is an approximation
Assume static scene

Blind vs Non-blind
Non-blind: blur kernel known
Blind: blur kernel unknown
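To make the forward model concrete, here is a minimal NumPy sketch of the convolution image-formation model; the Gaussian kernel, noise level, and random test image are placeholder assumptions, not values from the lecture.

```python
import numpy as np
from scipy.signal import fftconvolve

def synthesize_blurry(sharp, kernel, noise_sigma=0.01):
    """Blurry image = sharp image convolved with blur kernel, plus sensor noise."""
    blurry = fftconvolve(sharp, kernel, mode='same')        # the convolution operator
    blurry += noise_sigma * np.random.randn(*blurry.shape)  # model is only an approximation
    return blurry

# Placeholder inputs: a random "sharp" image and a Gaussian stand-in for a shake kernel
sharp = np.random.rand(128, 128)
t = np.arange(-7, 8)
g = np.exp(-0.5 * (t / 2.0) ** 2)
kernel = np.outer(g, g)
kernel /= kernel.sum()                                      # blur kernels sum to 1

blurry = synthesize_blurry(sharp, kernel)                   # input to the deblurring algorithm
```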


Camera Shake - is it a convolution?

8 different people, handholding the camera, using a 1 second exposure
Dot traces from each corner of the frame (top left, top right, bottom left, bottom right), shown for Persons 1-4

What if the scene is not static?
Partition the image into regions

Overview

Removing Camera Shake
Non-blind
Blind

Removing Motion Blur
Non-blind
Blind

Deconvolution is ill posed

f ⊗ x = y
More than one solution satisfies the same equation:
Solution 1: f ⊗ x₁ = y
Solution 2: f ⊗ x₂ = y

Slide from Anat Levin


Idea 1: Natural images prior
What makes images special?
[Figure: natural image, blur kernel, and their frequency spectra]

Convolution - frequency domain representation
1-D example: observed image spectrum = sharp image spectrum × filter spectrum
Spatial convolution ↔ frequency multiplication
Output spectrum has zeros where the filter spectrum has zeros
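A small 1-D check of that statement (a sketch with an arbitrary length-8 box filter standing in for motion blur):

```python
import numpy as np

n = 256
x = np.random.randn(n)            # 1-D stand-in for a sharp signal
f = np.zeros(n)
f[:8] = 1.0 / 8                   # box filter: constant motion over 8 samples

X, F = np.fft.fft(x), np.fft.fft(f)
Y = X * F                         # spatial convolution <-> frequency multiplication
y = np.real(np.fft.ifft(Y))       # the (circularly) blurred signal

# Output spectrum has zeros exactly where the filter spectrum has zeros
filter_zeros = np.abs(F) < 1e-9
print("zeros in filter spectrum:", int(filter_zeros.sum()))
print("max |output spectrum| at those frequencies:", np.abs(Y[filter_zeros]).max())
```

At those frequencies the blurry signal carries no information about the sharp one, which is exactly where a prior has to step in.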

Deconvolution with prior
At frequencies where the filter spectrum is zero, the data gives no information: many solutions have equal convolution error (low frequency vs. high frequency, natural vs. unnatural), so put a penalty on gradients
Natural images have sparse gradients
Slide from Anat Levin

Derivatives prior

x = argmin_x |f ⊗ x − y|² + Σᵢ ρ(∇xᵢ)
(first term: convolution error; second term: gradient penalty)

Slide from Anat Levin

Comparing deconvolution algorithms
Input / Richardson-Lucy / Gaussian prior ρ(x) = |x|² (spreads gradients) / sparse prior ρ(x) = |x|^0.8 (localizes gradients)
(Non-blind) deconvolution code available online:
http://groups.csail.mit.edu/graphics/CodedAperture/
Slide from Anat Levin

Comparing deconvolution algorithms (second example)
Input / Richardson-Lucy / Gaussian prior ρ(x) = |x|² (spreads gradients) / sparse prior ρ(x) = |x|^0.8 (localizes gradients)
(Non-blind) deconvolution code available online:
http://groups.csail.mit.edu/graphics/CodedAperture/
Slide from Anat Levin
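For the Gaussian case ρ(x) = |x|², the minimization above has a closed-form solution in the frequency domain, which makes a compact sketch; the prior weight `lam` and the circular-boundary handling are my assumptions, and the sparse |x|^0.8 prior needs an iterative solver instead.

```python
import numpy as np

def psf_to_otf(k, shape):
    """Zero-pad a small kernel, circularly shift its center to the origin
    (same idea as MATLAB's psf2otf), then take its 2-D FFT."""
    out = np.zeros(shape)
    out[:k.shape[0], :k.shape[1]] = k
    out = np.roll(out, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(out)

def deconv_gaussian_prior(y, f, lam=2e-3):
    """argmin_x |f*x - y|^2 + lam * |grad x|^2, solved in one FFT pass."""
    F = psf_to_otf(f, y.shape)
    Dx = psf_to_otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal derivative filter
    Dy = psf_to_otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical derivative filter
    num = np.conj(F) * np.fft.fft2(y)
    den = np.abs(F) ** 2 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))
```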


Application: Hubble Space Telescope


Launched with flawed mirror
Initially used deconvolution to correct images before corrective optics were installed
Image of star

Non-Blind Deconvolution
Matlab Demo

Overview

http://groups.csail.mit.edu/graphics/Code

Removing Camera Shake

dAperture/DeconvolutionCode.html

Non-blind
Blind

Removing Motion Blur


Non-blind
Blind

Removing Camera Shake from a Single Photograph
Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T. Roweis and William T. Freeman
Massachusetts Institute of Technology and University of Toronto
(Joint work with B. Singh, A. Hertzmann, S.T. Roweis & W.T. Freeman)

[Result: original vs. our algorithm]


Close-up: original / naïve sharpening / our algorithm

Image formation process
Blurry image = Sharp image ⊗ Blur kernel
Blurry image: input to algorithm
Sharp image: desired output
⊗: convolution operator
Model is an approximation
Assume static scene

Existing work on image deblurring

Old problem:
Trott, T., "The Effect of Motion on Resolution", Photogrammetric Engineering, Vol. 26, pp. 819-827, 1960.
Slepian, D., "Restoration of Photographs Blurred by Image Motion", Bell System Tech. J., Vol. 46, No. 10, pp. 2353-2362, 1967.

Software algorithms for natural images:
Many require multiple images
Mainly Fourier and/or Wavelet based
Strong assumptions about blur - not true for camera shake
Assumed forms of blur kernels
Image constraints are frequency-domain power-laws

Existing work on image deblurring

Hardware approaches:
Image stabilizers
Dual cameras - Ben-Ezra & Nayar, CVPR 2004
Coded shutter - Raskar et al., SIGGRAPH 2006
Our approach can be combined with these hardware methods

Why is this hard?
Simple analogy: 11 is the product of two numbers. What are they?
No unique solution:
11 = 1 x 11
11 = 2 x 5.5
11 = 3 x 3.667
etc.
Need more information!


Multiple possible solutions
Different (sharp image, blur kernel) pairs can produce the same blurry image

Natural image statistics
Characteristic distribution with heavy tails
Histogram of image gradients (log # pixels vs. gradient magnitude)
Blurry images have different statistics
Use a parametric model of sharp image statistics (parametric distribution fit to the gradient histogram)
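A quick way to see the heavy-tailed statistic described above is to histogram image gradients on a log count scale; a sketch (it assumes you load your own grayscale `sharp` and `blurry` arrays):

```python
import numpy as np

def log_gradient_histogram(img, bins=np.linspace(-1, 1, 101)):
    """Histogram of horizontal image gradients, returned as log(# pixels)."""
    grads = np.diff(img.astype(np.float64), axis=1).ravel()
    counts, edges = np.histogram(grads, bins=bins)
    return 0.5 * (edges[:-1] + edges[1:]), np.log(counts + 1)  # +1 avoids log(0)

# Sharp natural images give a sharply peaked curve with heavy tails;
# blurry images give a narrower, more Gaussian-looking curve.
# centers, log_counts_sharp  = log_gradient_histogram(sharp)
# centers, log_counts_blurry = log_gradient_histogram(blurry)
```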

Uses of natural image statistics

Denoising [Portilla et al. 2003; Roth and Black, CVPR 2005]
Superresolution [Tappen et al., ICCV 2003]
Intrinsic images [Weiss, ICCV 2001]
Inpainting [Levin et al., ICCV 2003]
Reflections [Levin and Weiss, ECCV 2004]
Video matting [Apostoloff & Fitzgibbon, CVPR 2005]
Corruption process assumed known

Three sources of information

1. Reconstruction constraint: estimated sharp image ⊗ estimated blur kernel = input blurry image
2. Image prior: distribution of gradients
3. Blur prior: positive & sparse


Three sources of information
y = observed image, b = blur kernel, x = sharp image

Posterior:
p(b, x | y) = k p(y | b, x) p(x) p(b)
1. Likelihood (reconstruction constraint)  ·  2. Image prior  ·  3. Blur prior

1. Likelihood p(y | b, x) - the reconstruction constraint:
p(y | b, x) = Πᵢ N(yᵢ | (x ⊗ b)ᵢ, σ²) ∝ Πᵢ exp( −((x ⊗ b)ᵢ − yᵢ)² / 2σ² )
i - pixel index

2. Image prior p(x)
p(x) = Πᵢ Σ_{c=1..C} πc N( f(xᵢ) | 0, sc² )
Mixture of Gaussians fit to the empirical distribution of image gradients
i - pixel index, c - mixture component index, f - derivative filter

3. Blur prior p(b)
p(b) = Πⱼ Σ_{d=1..D} πd E( bⱼ | λd )
Mixture of Exponentials
Positive & sparse: most elements near zero, a few can be large
No connectivity constraint
j - blur kernel element, d - mixture component index
[Plot: mixture-of-exponentials prior p(b) over blur kernel element values]
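Putting the three terms together, here is a sketch of the (unnormalized) negative log posterior; the mixture weights, variances, and exponential rates below are made-up placeholders, not the fitted values used in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def neg_log_posterior(x, b, y, sigma=0.01,
                      mog_w=(0.6, 0.3, 0.1), mog_s=(0.01, 0.05, 0.2),  # placeholder MoG params
                      exp_w=(0.8, 0.2), exp_lam=(100.0, 10.0)):        # placeholder exponential params
    """-log p(b, x | y) up to a constant: likelihood + image prior + blur prior."""
    # 1. Likelihood (reconstruction constraint): Gaussian noise at each pixel
    resid = fftconvolve(x, b, mode='same') - y
    nll = 0.5 * np.sum(resid ** 2) / sigma ** 2

    # 2. Image prior: mixture of zero-mean Gaussians on image derivatives
    g = np.concatenate([np.diff(x, axis=0).ravel(), np.diff(x, axis=1).ravel()])
    mog = sum(w * np.exp(-0.5 * g ** 2 / s ** 2) / (np.sqrt(2 * np.pi) * s)
              for w, s in zip(mog_w, mog_s))
    nll -= np.sum(np.log(mog + 1e-300))

    # 3. Blur prior: mixture of exponentials on the (positive) kernel elements
    bj = np.clip(b.ravel(), 0.0, None)
    moe = sum(w * lam * np.exp(-lam * bj) for w, lam in zip(exp_w, exp_lam))
    nll -= np.sum(np.log(moe + 1e-300))
    return nll
```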


The obvious thing to do
Combine the 3 terms - likelihood (reconstruction constraint), image prior, blur prior - into an objective function:
p(b, x | y) = k p(y | b, x) p(x) p(b)
Run conjugate gradient descent
This is Maximum a-Posteriori (MAP)
No success!

Variational Bayesian approach
Keeps track of uncertainty in the estimates of image and blur by using a distribution instead of a single estimate
[Plot: optimization surface for a single variable (score vs. pixel intensity), comparing the Maximum a-Posteriori (MAP) point with the Variational Bayes distribution]

Variational Independent Component Analysis
Miskin and Mackay, 2000
Binary images, priors on intensities
Small, synthetic blurs
Not applicable to natural images

Overview of algorithm
Input image
1. Pre-processing
2. Kernel estimation - multi-scale approach
3. Image reconstruction - standard non-blind deconvolution routine

Digital image formation process
RAW values → gamma correction → remapped values

Preprocessing
Input image → convert to grayscale → remove gamma correction → user selects patch from image
Blur process applied here (to the linear, pre-gamma values)
Bayesian inference is too slow to run on the whole image - infer the kernel from this patch
P. Debevec & J. Malik, Recovering High Dynamic Range Radiance Maps from Photographs, SIGGRAPH 97
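A sketch of this pre-processing step; the 2.2 power-law is an assumed stand-in for the camera's unknown tone curve (the Debevec & Malik reference above is about recovering that curve properly).

```python
import numpy as np

def preprocess(img_rgb_uint8, gamma=2.2):
    """Grayscale conversion plus approximate gamma removal, so that the
    convolution model holds in (roughly) linear intensity."""
    img = img_rgb_uint8.astype(np.float64) / 255.0
    gray = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return gray ** gamma          # undo display gamma -> approximately linear values

# patch = preprocess(photo)[r0:r1, c0:c1]   # user-selected patch for kernel inference
```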


Inferring the kernel: multiscale method
Input image → convert to grayscale → remove gamma correction → user selects patch from image
Initialize 3x3 blur kernel
Loop over scales: upsample estimates → variational Bayes
Use multi-scale approach to avoid local minima

Initialization
Blurry patch → initial image estimate and initial blur kernel
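A skeleton of the coarse-to-fine loop, just to show the structure; `vb_update` is a hypothetical stand-in for the per-scale variational Bayes inference, which is the part this sketch does not implement.

```python
import numpy as np
from scipy.ndimage import zoom

def estimate_kernel_multiscale(patch, n_scales=6, vb_update=None):
    """Coarse-to-fine kernel estimation skeleton (a sketch, not the authors' code).

    vb_update(blurry, x, k) -> (x, k) runs variational Bayes at one scale."""
    k = np.ones((3, 3)) / 9.0                               # initialize 3x3 blur kernel
    x = None
    for s in range(n_scales):
        factor = np.sqrt(2.0) ** (s - n_scales + 1)         # ..., 1/2, 1/sqrt(2), 1
        blurry = zoom(patch, factor)                        # blurry patch at this scale
        if x is None:
            x = blurry.copy()                               # initial image estimate
        else:
            x = zoom(x, (blurry.shape[0] / x.shape[0],      # upsample estimates
                         blurry.shape[1] / x.shape[1]))
            k = zoom(k, np.sqrt(2.0))
            k = np.clip(k, 0.0, None)
            k /= k.sum()                                    # keep kernel positive, summing to 1
        x, k = vb_update(blurry, x, k)                      # variational Bayes at this scale
    return k
```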

Image Reconstruction
Input image → convert to grayscale → remove gamma correction → user selects patch from image
Loop over scales: initialize 3x3 blur kernel → upsample estimates → variational Bayes
Full-resolution blur estimate → non-blind deconvolution (Richardson-Lucy) → deblurred image
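The slides name Richardson-Lucy for this last non-blind step; here is a minimal textbook version (not the authors' exact implementation; it assumes a non-negative image and a kernel that sums to 1).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurry, kernel, n_iter=30, eps=1e-12):
    """Classic Richardson-Lucy deconvolution with a known, fixed kernel."""
    x = np.full(blurry.shape, blurry.mean(), dtype=np.float64)  # flat initial estimate
    k_flip = kernel[::-1, ::-1]                                  # adjoint of the blur
    for _ in range(n_iter):
        estimate = fftconvolve(x, kernel, mode='same')
        ratio = blurry / (estimate + eps)
        x = x * fftconvolve(ratio, k_flip, mode='same')          # multiplicative update
    return x
```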

Synthetic experiments

Synthetic example
Sharp image ⊗ artificial blur trajectory → synthetic blurry image


Inference at each scale (initial scale through scale 6)
At each scale: image before / image after, kernel before / kernel after


Inference, final scale
Image before / image after, kernel before / kernel after

Comparison of kernels
True kernel vs. estimated kernel

Blurry image / Matlab's deconvblind / our output


True sharp image

What we do and don't model
DO:
Gamma correction
Tone response curve (if known)
DON'T:
Saturation
JPEG artifacts
Scene motion
Color channel correlations

Real experiments

Results on real images
Submitted by people from their own photo collections
Type of camera unknown
Output does contain artifacts: increased noise, ringing
Compare with existing methods


[Result: original photograph / our output / blur kernel, with close-up of original and output]

[Result: original photograph; close-up comparing original, Matlab's deconvblind, and our output]


[Result: original image / our output / blur kernel; comparison with Photoshop "sharpen more"]

[Result: original photograph; close-up of image and of our output / blur kernel]


[Result: original image / our output / blur kernel, with close-up]

What about a sharp image?
[Result: sharp original photograph / our output / blur kernel]


[Result: original image / our output / blur kernel, with close-up]

[Result: original photograph / blurry image patch / our output / blur kernel]

[Close-up of bird: original / unsharp mask / our output, with blur kernel]


[Result: original photograph / our output / blur kernel]

Image artifacts & estimated kernels
Image patterns and the corresponding blur kernels
Note: blur kernels were inferred from large image patches, NOT the image patterns shown

Code available online
http://cs.nyu.edu/~fergus/research/deblur.html

Summary
Method for removing camera shake from real photographs
First method that can handle complicated blur kernels
Uses natural image statistics
Non-blind deconvolution currently simplistic
Things we have yet to model:
Correlations in colors, scales, kernel continuity
JPEG noise, saturation, object motion

Overview

Removing Camera Shake
Non-blind
Blind

Removing Motion Blur
Non-blind
Blind


Input photo → deblurred result

Traditional camera: shutter is OPEN
Our camera: Flutter Shutter


Flutter Shutter: shutter is OPEN and CLOSED during the exposure

Comparison of blurred images
Lab setup
Implementation: completely portable, sync function

Blurring = convolution
Traditional camera: box filter
Flutter shutter: coded filter - preserves high frequencies!
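Why the coded filter preserves high frequencies, in a few lines; the 52-chop random binary code below is only an illustration, since Raskar et al. search for codes with a well-behaved spectrum rather than drawing one at random.

```python
import numpy as np

n_chops = 52
box = np.ones(n_chops)                                  # traditional camera: shutter open throughout
code = np.random.default_rng(0).integers(0, 2, n_chops).astype(float)  # flutter shutter pattern

nfft = 512
H_box = np.abs(np.fft.rfft(box, nfft))
H_code = np.abs(np.fft.rfft(code, nfft))

# The box filter's spectrum has exact zeros (inverse filter unstable);
# a good code keeps the spectrum away from zero (inverse filter stable).
print("min |H| box :", H_box.min())
print("min |H| code:", H_code.min())
```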


Comparison
Short exposure / long exposure / coded exposure
Long (box) exposure: inverse filter unstable
Coded exposure: inverse filter stable
Results: ground truth / Matlab Lucy / our result

Overview

Removing Camera Shake
Non-blind
Blind

Removing Motion Blur
Non-blind
Blind

Use statistics to determine blur size
Assumes direction of blur known


Input image
Deblur whole image at once
Local evidence / proposed boundary
Result image / input image (for comparison)


p(b, x | y) = k p(y | b, x) p(x) p(b)

A scalar example: let y = 2, σ² = 0.1
Likelihood: N(y | bx, σ²)
Gaussian prior on x: N(x | 0, 2)

MAP solution
Highest point on the surface: argmax_{b,x} p(x, b | y)

Marginal distribution p(b|y)
p(b | y) = ∫ p(b, x | y) dx = k ∫ p(y | b, x) p(x) dx

[Plots: Bayes p(b|y) as a function of b]
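The scalar model above is small enough to compute exactly; here is a sketch using the slide's values (y = 2, σ² = 0.1, x ~ N(0, 2), and an assumed flat prior on b) that contrasts the MAP score with the true marginal.

```python
import numpy as np

sigma_x2, sigma_n2, y = 2.0, 0.1, 2.0           # values from the toy setup
b = np.linspace(0.05, 10.0, 400)

# Marginal (Bayes): integrate x out analytically -> p(y|b) = N(y | 0, b^2*sigma_x2 + sigma_n2)
var = b ** 2 * sigma_x2 + sigma_n2
p_bayes = np.exp(-0.5 * y ** 2 / var) / np.sqrt(2 * np.pi * var)

# MAP surface: for each b, plug in the x that maximizes the joint p(y|b,x) p(x)
x_star = y * b * sigma_x2 / (b ** 2 * sigma_x2 + sigma_n2)
p_map = (np.exp(-0.5 * (y - x_star * b) ** 2 / sigma_n2)
         * np.exp(-0.5 * x_star ** 2 / sigma_x2))

print("argmax_b of marginal p(b|y):", b[np.argmax(p_bayes)])   # a sensible, finite b
print("argmax_b of MAP score      :", b[np.argmax(p_map)])     # climbs to the edge of the grid
```

In this toy the MAP score keeps increasing with b (it explains y with a huge blur and a tiny image), while the marginal peaks at a reasonable value - the motivation for the variational treatment that follows.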


MAP solution
Highest point on the surface: argmax_{b,x} p(x, b | y)

Variational Bayes
True Bayesian approach not tractable
Approximate the posterior with a simple distribution

Fitting the posterior with a Gaussian
Approximating distribution q(x, b) is Gaussian
Minimize KL( q(x, b) || p(x, b | y) )
[Plot: KL(q||p) vs. Gaussian width]

Variational approximation of the marginal
[Plot: p(b|y) - true marginal, variational approximation, and MAP]
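A toy version of the "KL distance vs. Gaussian width" idea; the 1-D density p below is an arbitrary stand-in for p(x, b | y), used only to show the fitting procedure.

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 2001)
dt = t[1] - t[0]
# Hypothetical 1-D "posterior": a bimodal mixture, NOT the lecture's actual p(x, b | y)
p = 0.7 * np.exp(-0.5 * (t - 1.0) ** 2 / 0.3 ** 2) + 0.3 * np.exp(-0.5 * (t + 1.5) ** 2 / 0.6 ** 2)
p /= p.sum() * dt

def kl_q_p(width, mean=1.0):
    """KL( q || p ) on the grid, for a Gaussian q with the given mean and width."""
    q = np.exp(-0.5 * (t - mean) ** 2 / width ** 2)
    q /= q.sum() * dt
    return float(np.sum(q * np.log((q + 1e-300) / (p + 1e-300))) * dt)

widths = np.linspace(0.05, 2.0, 80)
kls = [kl_q_p(w) for w in widths]
print("width minimizing KL(q||p):", widths[int(np.argmin(kls))])
```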


Setup of Variational Approach
Work in gradient domain: x ⊗ b = y  →  ∇x ⊗ b = ∇y
Approximate the posterior p(∇x, b | ∇y) with q(∇x, b)
Assume q(∇x, b) = q(∇x) q(b)
q(∇x) is Gaussian on each pixel
q(b) is rectified Gaussian on each blur kernel element
Cost function: KL( q(∇x) q(b) || p(∇x, b | ∇y) )

Try sampling from the model
Let true b = 2
Repeat:
  Sample x ~ N(0, 2)
  Sample n ~ N(0, 2)
  y = xb + n
  Compute pMAP(b|y), pBayes(b|y) & pVariational(b|y)
  Multiply with existing density estimates (assume iid)
[Plot: p(b|y) estimates as samples accumulate]
