Digital camera image processing. Digital cameras are fun to use because they produce a finished image seconds after the shutter button is pressed, but in those few seconds much has been done to create that full-color rendering, and even more was done during the camera design process long before the camera ever reached the user's hands. This topic presents the details of the color processing performed on the raw data from the camera's sensor: color filter array interpolation, color calibration, antialiasing, infrared rejection, and white-point correction. It also explains how to design certain critical camera elements, such as the color filter. Finally, the entire color processing chain is put in perspective by examining how these components are assembled in a commercially available digital camera. Image storage format issues are also discussed.


Konstantinos N. Plataniotis

University of Toronto kostas@dsp.utoronto.ca

Image and Video Processing: Digital Camera Processing, cDNA Micro-array Imaging, Visual Object Compression, Universal Multimedia Access

Multimedia Security: Authentication & Fingerprinting, Image and Video Encryption, Visual Secret Sharing, Secure Streaming, Watermarking

Face Recognition, Gait Recognition, Visual Surveillance

Professors:

Dimitris Hatzinakos (dimitris@comm.utoronto.ca) Kostas Plataniotis (kostas@dsp.utoronto.ca)

Lukac

Digital imaging (figure data: 25,000,000 / 14,100,000 / 6,730,000 / 972,000 / 623,000 / 52,800)

Is it possible to compensate for the limitations of the sensor using digital signal processing solutions?

digital image processing, color image processing, single-sensor camera, color filter array (CFA), Bayer CFA

Outline

The problem Background

(Figure: the color image Parrots with K1 image rows and K2 image columns. Each spatial position i = (k1 - 1)K1 + k2 holds an image sample, e.g. xi = (186, 48, 42) with components xi1 = 186, xi2 = 48, xi3 = 42. Shown are the full RGB image, the separate R, G and B channels, and the RG, RB and GB subimages.)

RGB color images: acquired by digital cameras (the most popular and widely used) or synthetic (e.g. grey-scale image coloration)

The RGB color space is commonly used for acquisition, storage, and display purposes; it is based on the additive concept of color composition.

An RGB color pixel is a vector in the three-dimensional (RGB) color space; the vector components are the intensities measured in the R, G and B color channels.

A color vector x_i = (x_i1, x_i2, x_i3) is uniquely characterized by:

its magnitude (length):  M_xi = ||x_i|| = sqrt((x_i1)^2 + (x_i2)^2 + (x_i3)^2)

its direction (orientation):  D_xi = (x_i1/M_xi, x_i2/M_xi, x_i3/M_xi), with ||D_xi|| = 1

(Figure: the RGB color cube with R, G, B axes and Red, Yellow, Magenta, Green vertices; the direction vectors D_xi lie on the unit sphere.)
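The magnitude/direction decomposition above is direct to compute; a minimal sketch in Python (the deck itself contains no code), reusing the sample value xi = (186, 48, 42) from the Parrots example:

```python
import numpy as np

def magnitude(x):
    """Magnitude M_x = sqrt(x1^2 + x2^2 + x3^2) of an RGB color vector."""
    return float(np.sqrt(np.sum(np.asarray(x, dtype=float) ** 2)))

def direction(x):
    """Direction D_x = x / M_x; by construction it lies on the unit sphere."""
    x = np.asarray(x, dtype=float)
    return x / magnitude(x)

xi = np.array([186.0, 48.0, 42.0])   # sample pixel from the Parrots image
M = magnitude(xi)
D = direction(xi)
```

Two pixels with the same direction D but different magnitudes M share the same chromaticity and differ only in intensity, which is the property the later spectral models exploit.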

The digital imaging age: images to be captured are available instantly. Commercial proliferation: imaging-enabled wireless phones, pocket devices (PDAs), surveillance and automotive applications.

Basic hardware architectures: three-sensor device; single-sensor device (used in consumer-grade digital cameras); X3 technology based device. Focus on functionality vs. cost:

optics (optical zoom), digital zoom, memory, battery, etc.; multimedia acquisition, processing & transmission (image, audio and video)

Three-sensor imaging

the sensor is a monochromatic device and the most expensive component of the digital camera (10% to 25% of the total cost): either a charge-coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor

image scene -> optical system -> R/G/B color filters + image sensors (CCD/CMOS), one per channel -> sensor data arranged as RGB color data (camera output)

Layered (three-layer) silicon sensor

new technology: an expensive solution for professional devices (medical & science applications); directly captures RGB light at each spatial location in an image during a single exposure; takes advantage of the natural light-absorbing characteristics of silicon: the color filters are stacked vertically and ordered according to the energy of the photons absorbed by silicon (professional design)

Single-sensor imaging

image scene -> optical system -> CFA + image sensor (CCD/CMOS) -> sensor (CFA) data -> demosaicking -> camera output (sensor data arranged as RGB color data)

Key factors in CFA design

immunity to color artifacts and color moiré; cost-effective reconstruction; reaction of the pattern to image sensor imperfections; immunity to optical/electrical cross talk between neighboring pixels

used to produce a 2-D array or mosaic of color components produced CFA (sensor) image is a gray-scale image full-color image is obtained by demosaicking

i) three-color (RGB, YMC) systems - RGB is most widely used ii) mixed primary/complementary colors (e.g. MGCY pattern) iii) four and more color systems (white and/or colors with shifted spectral sensitivity) CFAs in ii) and iii) may produce more accurate hue gamut, but they limit the useful range of the darker colors

image scene -> optical system (lens, zoom, focus; aperture and shutter; viewfinder) -> optical filter blocking system (infrared blocking, anti-aliasing) -> (Bayer) CFA -> image sensor (CCD, CMOS)

Bayer CFA

Yamanaka CFA

flash, user interface, power supply (battery, AC), microprocessor, DRAM buffer, bus

A/D converter

memory stick / media card, color LCD display, PC / TV interface (USB, A/V)

Diagonal Bayer CFA, pseudo-random CFA, HVS-based design

DRAM buffers & stores digital data obtained from the A/D converter DRAM passes data to application-specific integrated circuits (ASIC) DSP operations, such as demosaicking and image re-sizing may be realized in both ASIC and micro-processor levels

Processing

demosaicking (spectral interpolation); demosaicked image postprocessing (color image enhancement); camera image zooming (spatial interpolation in the CFA or full-color domain)

Implementation

Conventional digital camera

real-time constraints (computational simplicity requirements)

CFA data -> camera image processing -> storage (digital camera)

Compression

lossy (or near lossless) vs. lossless compression CFA image compression vs. demosaicked image compression

The PC interfaces with the digital camera, which stores the images in the raw CFA format; this allows for the utilization of sophisticated solutions.

camera (CFA) image indexing, connection to image retrieval, camera image encryption (CFA data -> storage)

Ability to follow the spectral image characteristics: component-wise (marginal) processing (component -> component); spectral model-based processing (vector -> component); vector processing (vector -> vector)

Component-wise processing

each color plane processed separately omission of the spectral information results in color shifts and artifacts

input color image -> three parallel camera image processing channels -> output color image

non-adaptive processing vs. (data-)adaptive processing

essential spectral information utilized during processing

input color image -> camera image processing (channels processed jointly via the spectral model) -> output color image

spectral model used to eliminate color shifts and artifacts

input camera image -> [edge-sensing mechanism + spectral model] -> outputted camera image

the edge-sensing mechanism is used to eliminate edge blurring and to produce sharp-looking fine details

Vector processing

image pixels are processed as vectors computationally expensive

input color image camera image processing output color image

Non-adaptive processing

no data-adaptive control often reduces to linear processing - easy to implement inefficient performance (image blurring)

Data-adaptive processing

Construction

using spatial, structural, and spectral characteristics

Edge-sensing mechanism Spectral model

processing

no parameters or fixed settings

x_(p,q) = Σ_{(i,j)∈ζ} { w'_(i,j) f(x_(i,j), x_(p,q)) }

where f(·,·) denotes the estimation operation applied to the neighboring samples, with normalized weights

w'_(i,j) = w_(i,j) / Σ_{(i,j)∈ζ} w_(i,j)

input camera image -> estimation operations (generalized camera solutions) -> outputted camera image
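The normalized weighted aggregation can be sketched in a few lines; here, purely for illustration, f is assumed to pass the neighboring sample through unchanged (the actual operation depends on the camera solution):

```python
import numpy as np

def adaptive_estimate(neighbors, weights):
    """Normalized weighted aggregation x_(p,q) = sum_j w'_j * x_j,
    with w'_j = w_j / sum(w).  `neighbors` holds the samples over the
    mask zeta, `weights` the raw edge-sensing weights."""
    neighbors = np.asarray(neighbors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    w = weights / weights.sum()          # normalized weights w'
    return float(np.dot(w, neighbors))

# with equal weights the estimator reduces to a plain average
est = adaptive_estimate([10.0, 20.0, 30.0, 40.0], [1.0, 1.0, 1.0, 1.0])
```

Unequal weights pull the estimate toward the most trusted neighbors, which is exactly how the edge-sensing mechanism steers interpolation along, rather than across, edges.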

Adaptive processing

edge-sensing weights are used to follow the structural content; nonlinear processing is essential in producing high-quality, sharp-looking images

parameters adaptation

Spatial characteristics

local neighborhood area

processing

Structural characteristics

edge-sensing mechanism

The edge-sensing mechanism maps the CFA image to the weights: z -> {w_(i,j), (i,j) ∈ ζ}, where z denotes the CFA image.

Spectral characteristics

spectral model

Features

approximation using a shape mask ζ; the shape and size of ζ vary depending on the CFA used and the processing task (demosaicking, resizing, etc.)

(Figure: shape masks (a)-(e) widely used in the demosaicking process.)

Essential to produce sharp-looking images; the structural constraints imposed on the camera solution relate to the form of the ESM operator used to generate the edge-sensing weights.

Conventional designs: operate on a large (5x5, 7x7) neighborhood; specialize on a particular CFA (e.g. the Bayer CFA). For ζ = {(p-1,q), (p,q-1), (p,q+1), (p+1,q)}:

w_(p-1,q) = 1/(1 + |z_(p-2,q) - z_(p,q)| + |z_(p-1,q) - z_(p+1,q)|)
w_(p,q-1) = 1/(1 + |z_(p,q-2) - z_(p,q)| + |z_(p,q-1) - z_(p,q+1)|)
w_(p,q+1) = 1/(1 + |z_(p,q+2) - z_(p,q)| + |z_(p,q+1) - z_(p,q-1)|)
w_(p+1,q) = 1/(1 + |z_(p+2,q) - z_(p,q)| + |z_(p+1,q) - z_(p-1,q)|)

For ζ = {(p-1,q-1), (p-1,q+1), (p+1,q-1), (p+1,q+1)}:

w_(p-1,q-1) = 1/(1 + |z_(p-2,q-2) - z_(p,q)| + |z_(p-1,q-1) - z_(p+1,q+1)|)
w_(p-1,q+1) = 1/(1 + |z_(p-2,q+2) - z_(p,q)| + |z_(p-1,q+1) - z_(p+1,q-1)|)
w_(p+1,q-1) = 1/(1 + |z_(p+2,q-2) - z_(p,q)| + |z_(p+1,q-1) - z_(p-1,q+1)|)
w_(p+1,q+1) = 1/(1 + |z_(p+2,q+2) - z_(p,q)| + |z_(p+1,q+1) - z_(p-1,q-1)|)
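The four axial weights above translate directly into code; a sketch operating on a CFA array z (the indices follow the formulas, so the location (p, q) must sit at least two pixels inside the image):

```python
import numpy as np

def axial_weights(z, p, q):
    """Inverse-gradient edge-sensing weights for the four axial
    neighbors of CFA location (p, q), per the Bayer-CFA design."""
    z = np.asarray(z, dtype=float)
    return {
        (p - 1, q): 1.0 / (1 + abs(z[p - 2, q] - z[p, q]) + abs(z[p - 1, q] - z[p + 1, q])),
        (p, q - 1): 1.0 / (1 + abs(z[p, q - 2] - z[p, q]) + abs(z[p, q - 1] - z[p, q + 1])),
        (p, q + 1): 1.0 / (1 + abs(z[p, q + 2] - z[p, q]) + abs(z[p, q + 1] - z[p, q - 1])),
        (p + 1, q): 1.0 / (1 + abs(z[p + 2, q] - z[p, q]) + abs(z[p + 1, q] - z[p - 1, q])),
    }

flat = np.full((5, 5), 7.0)          # uniform area: all gradients are zero
w_flat = axial_weights(flat, 2, 2)   # every weight equals 1
```

In a flat region all weights are 1 (plain averaging); across an edge the gradient terms grow and the corresponding weight shrinks, penalizing neighbors on the far side of the edge.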

both structural and spatial characteristics are considered in the ESM construction

Concept

ESM operator uses some form of inverse gradient of the samples in the CFA image

w_(i,j) = 1 / (1 + f(∇_(i,j)))

where f(·) is a function of the local image gradient ∇_(i,j)

large image gradients usually indicate that the corresponding vectors are located across edges (penalized through small weights)

Cost-effective, universal design

operates within the shape mask ζ; the aggregation concept is defined here over the four-neighborhoods only; the design is suitable for any existing CFA

takes into consideration both the spectral and the spatial characteristics of the neighboring color pixels

pixel occupying the location to be interpolated: x_(p,q) = [x_(p,q)1, x_(p,q)2, x_(p,q)3]; pixel x_(i,j) occupying a neighboring location

Edge-sensing weights:

w_(i,j) = 1 / (1 + Σ_{(g,h)∈ζ} |x_(i,j)k - x_(g,h)k|)    or    w_(i,j) = (1 + exp{Σ_{(g,h)∈ζ} |x_(i,j)k - x_(g,h)k|})^(-1)

Spectral models:

color-ratio model:  x_(p,q)k / x_(i,j)k = x_(p,q)2 / x_(i,j)2,  for k = 1 or k = 3

normalized color-ratio model (hue constancy is enforced in both edge transitions and uniform image areas):  (x_(p,q)k + β) / (x_(i,j)k + β) = (x_(p,q)2 + β) / (x_(i,j)2 + β)

color-difference model (constrained component-wise magnitude difference):  x_(p,q)k - x_(i,j)k = x_(p,q)2 - x_(i,j)2
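As a toy illustration of these models, suppose the G value at (p, q) and a full R,G pair at a neighbor (i, j) are known; solving each constraint for the missing R gives one-line estimators (the sample values below are hypothetical, chosen only to show the two models diverging):

```python
def interp_color_difference(g_pq, r_ij, g_ij):
    """Color-difference model: R_(p,q) - R_(i,j) = G_(p,q) - G_(i,j),
    solved for the missing R component at (p, q)."""
    return r_ij + (g_pq - g_ij)

def interp_color_ratio(g_pq, r_ij, g_ij):
    """Color-ratio model: R_(p,q) / R_(i,j) = G_(p,q) / G_(i,j)."""
    return r_ij * g_pq / g_ij

r_diff  = interp_color_difference(120.0, 80.0, 100.0)   # 80 + (120 - 100)
r_ratio = interp_color_ratio(120.0, 80.0, 100.0)        # 80 * 120 / 100
```

The two models agree in flat regions but differ near intensity transitions; the ratio model preserves hue at the cost of instability for small denominators, which motivates the normalized (shifted) variant.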

Vector SM

Modelling assumption

two neighboring vectors should have identical color chromaticity properties (directional characteristics) two spatially neighboring vectors should be collinear in the RGB (vector) color space

Vector SM

Computational approach: collinearity means the angle between x_(p,q) and x_(i,j) is zero:

x_(p,q) · x_(i,j) = ||x_(p,q)|| ||x_(i,j)|| cos(x_(p,q), x_(i,j)),  with cos(x_(p,q), x_(i,j)) = 1, i.e. angle(x_(p,q), x_(i,j)) = 0

Σ_{k=1}^{3} x_(p,q)k x_(i,j)k = sqrt(Σ_{k=1}^{3} (x_(p,q)k)^2) sqrt(Σ_{k=1}^{3} (x_(i,j)k)^2)

Any color component can be determined from the expression above by solving the quadratic equation a y^2 + b y + c = 0, where y denotes the component to be determined (e.g. y = x_(p,q)2 for the G component). Due to the zero discriminant b^2 - 4ac = 0 (collinear vectors), the equation has the unique solution

y = y1 = y2 = -b / (2a)

From the two-component vector expression:

for the G component:  x_(p,q)2 = x_(p,q)k x_(i,j)2 / x_(i,j)k

for the R or B component:  x_(p,q)k = x_(p,q)2 x_(i,j)k / x_(i,j)2

Geometric interpretation: the interpolated component lies on the chromaticity line defined by the available components in the RGB color space.

R (or B)

(Figure: geometric interpretation of the vector SM from the three-component vector expression, shown for the G component in the RGB space.)

Generalized vector SM

The color vectors are shifted by a parameter γ before the collinearity constraint is applied; this modifies their directional characteristics and normalizes their component-wise magnitude differences:

angle(x_(p,q) + γI, x_(i,j) + γI) = 0,  i.e. cos(x_(p,q) + γI, x_(i,j) + γI) = 1,  with I = [1, 1, 1]

so that each missing component can be calculated via

x_(p,q)k = ((x_(p,q)2 + γ) / (x_(i,j)2 + γ)) (x_(i,j)k + γ) - γ   for the R or B component (k = 1 or k = 3)

x_(p,q)2 = ((x_(i,j)2 + γ) / (x_(i,j)k + γ)) (x_(p,q)k + γ) - γ   for the G component

(Figure: geometric interpretation: the component to be calculated lies on the chromaticity line through the shifted available components, shown for γ >> 0 and γ = 0.)

Features: universal solution; easy to implement; tuning of both directional and magnitude characteristics during camera image processing operations; generalization of all previous spectral models:

non-shifted vector model (γ = 0, three-component expression)
normalized color-ratio model (γ, two-component expression)
color-ratio model (γ = 0, two-component expression)
color-difference model (γ >> 0, two-component expression)
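A sketch of the γ-shifted two-component expression, showing numerically how the limiting cases reduce to the color-ratio (γ = 0) and, approximately, the color-difference (γ >> 0) models; the sample values are hypothetical:

```python
def generalized_sm(g_pq, x_ij_k, g_ij, gamma):
    """Generalized (shifted) spectral model:
    x_(p,q)k = (x_(p,q)2 + gamma) * (x_(i,j)k + gamma) / (x_(i,j)2 + gamma) - gamma."""
    return (g_pq + gamma) * (x_ij_k + gamma) / (g_ij + gamma) - gamma

ratio_case = generalized_sm(120.0, 80.0, 100.0, 0.0)   # gamma = 0: color-ratio model
diff_case  = generalized_sm(120.0, 80.0, 100.0, 1e6)   # large gamma: ~ color-difference model
```

With γ = 0 the estimate is 80 * 120 / 100 = 96 (ratio behavior); as γ grows the estimate approaches 80 + (120 - 100) = 100 (difference behavior), so γ smoothly interpolates between the two spectral models.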

Demosaicking transforms a gray-scale CFA image z: Z^2 -> Z, acquired over the K1 x K2 lattice, into a demosaicked, full-color (restored) image y: Z^2 -> Z^3 (color restoration).

For the Bayer CFA, the acquired CFA data are first arranged as color vectors, eq. (1):

x_(p,q) = [z_(p,q), 0, 0] for p odd and q even,
x_(p,q) = [0, 0, z_(p,q)] for p even and q odd,
x_(p,q) = [0, z_(p,q), 0] otherwise.     (1)

The missing components are then restored via the weighted aggregation x_(p,q)k = Σ_{(i,j)∈ζ} { w'_(i,j) y_(i,j)k }, where y_(i,j)k denotes the spectral-model based estimate formed at the neighboring location (i,j).
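Equation (1) can be sketched as a routine that turns an RGB image into Bayer-CFA vectors; note that the slides index rows and columns from 1, so the parity tests below are written against 1-based positions while the arrays stay 0-based:

```python
import numpy as np

def bayer_vectors(rgb):
    """Arrange CFA data as color vectors per eq. (1):
    [z,0,0] for p odd, q even; [0,0,z] for p even, q odd;
    [0,z,0] otherwise (p, q are 1-based, as on the slides)."""
    rgb = np.asarray(rgb, dtype=float)
    K1, K2, _ = rgb.shape
    x = np.zeros_like(rgb)
    for p in range(1, K1 + 1):
        for q in range(1, K2 + 1):
            if p % 2 == 1 and q % 2 == 0:      # R location
                x[p - 1, q - 1, 0] = rgb[p - 1, q - 1, 0]
            elif p % 2 == 0 and q % 2 == 1:    # B location
                x[p - 1, q - 1, 2] = rgb[p - 1, q - 1, 2]
            else:                              # G location
                x[p - 1, q - 1, 1] = rgb[p - 1, q - 1, 1]
    return x

demo = bayer_vectors(np.full((4, 4, 3), 50.0))
```

Each output vector carries exactly one measured component and two zeros, which is why demosaicking is mandatory before a full-color image exists.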

No full-color image without demosaicking

an integral and probably the most common processing step used in digital cameras; it is often connected with demosaicked image postprocessing, which is viewed as a correction step

demosaicking process (mandatory): Bayer image -> G plane population -> R plane populated using the SM -> B plane populated using the SM -> restored color image (the original R and B CFA data are retained)

correction process (optional): G plane corrected via the SM (correction using the R or B color components) -> R plane corrected using the SM -> B plane corrected using the SM -> corrected color image

Full-color image enhancement

postprocessing the demosaicked image is an optional step implemented mainly in software and activated by the end-user

scene -> optics -> A/D -> demosaicking -> postprocessing

demosaicked (full-color) image -> color image enhancement -> postprocessed demosaicked image with enhanced quality; the corrected color image is pleasing for viewing

localizes and eliminates false colors created during demosaicking improves both the color appearance and the sharpness of the demosaicked image unlike demosaicking, postprocessing can be applied iteratively until certain quality criteria are met

demosaicking and demosaicked image postprocessing are two fundamentally different processing steps, although they may employ similar, if not identical, signal processing concepts; postprocessing of demosaicked images is a novel application of great importance to both the end-users and the camera manufacturers

(Figure: demosaicking results for the methods BI, MFI, CHI, ECI and SAIG.)

Universal demosaicking

Information about the arrangement of color filters in the actual CFA is

available either from the camera manufacturer (when demosaicking is implemented in the camera), or

obtained from the raw CFA image stored in TIFF-EP format (when demosaicking is performed on a companion PC)

can be used with an arbitrary RGB CFA; a canonical framework which readily accommodates a large number of application requirements and implementation constraints

demosaicking/postprocessing operations are guided by vectorial fields of location flags (denoting the presence of color components in the image); a control mechanism is used to denote the processing locations with a sufficient amount of available neighboring color components

(Figure: CFA patterns (a)-(f) and their corresponding location-flag fields.)

Evaluation: the original K1 x K2 color image o is sampled to a CFA image, demosaicked, and the restored image x is compared with o (processing error).

Mean Absolute Error (MAE):

MAE = (1 / (3NM)) Σ_{k=1}^{3} Σ_{i=1}^{N} Σ_{j=1}^{M} | o_{i,j}^k - x_{i,j}^k |

Mean Square Error (MSE):

MSE = (1 / (3NM)) Σ_{k=1}^{3} Σ_{i=1}^{N} Σ_{j=1}^{M} ( o_{i,j}^k - x_{i,j}^k )^2

Normalized Color Difference (NCD), expressed in the CIE LUV color space:

NCD = [ Σ_{i=1}^{N} Σ_{j=1}^{M} sqrt( (L_{i,j}^o - L_{i,j}^x)^2 + (u_{i,j}^o - u_{i,j}^x)^2 + (v_{i,j}^o - v_{i,j}^x)^2 ) ] / [ Σ_{i=1}^{N} Σ_{j=1}^{M} sqrt( (L_{i,j}^o)^2 + (u_{i,j}^o)^2 + (v_{i,j}^o)^2 ) ]
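The MAE and MSE criteria are direct to implement; a minimal sketch (NCD additionally requires an RGB to CIE LUV conversion, which is omitted here):

```python
import numpy as np

def mae(o, x):
    """Mean absolute error over all 3*N*M color components."""
    o, x = np.asarray(o, dtype=float), np.asarray(x, dtype=float)
    return float(np.abs(o - x).mean())

def mse(o, x):
    """Mean square error over all 3*N*M color components."""
    o, x = np.asarray(o, dtype=float), np.asarray(x, dtype=float)
    return float(((o - x) ** 2).mean())

o = np.zeros((2, 2, 3))          # toy "original" image
x = np.full((2, 2, 3), 2.0)      # toy "restored" image
```

Averaging over all array elements implements the 1/(3NM) normalization, since an N x M x 3 array holds exactly 3NM components.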

Impact on image quality:

quality significantly varies with both the CFA and the input (captured) image

Motivation: technological advances -> miniaturization of single-sensor cameras; pocket devices, mobile phones and PDAs -> low optical capabilities and computational resources; demand for increasing the resolution of the camera output.

Complexity increases for pseudo-random and random CFAs; the Bayer CFA offers one of the simplest color reconstructions. (Figure: CFA solution A vs. CFA solution B.)

Zooming in the RGB domain (conventionally used):

CFA image data -> demosaicking -> image zooming -> zoomed image

slower, since there are more samples to process; amplifies the imperfections introduced during demosaicking

Zooming in the CFA domain (novel approach):

CFA image data -> CFA image zooming -> demosaicking -> zoomed image

Pixel arrangements observed during processing (image lattice): the location (p,q) to be filled is surrounded by the neighbors (p-1,q-1), (p-1,q), (p-1,q+1), (p+1,q-1), (p+1,q), (p+1,q+1); at some lattice locations there is not enough information to interpolate directly.

Zooming methods: adaptive vs. non-adaptive; component-wise vs. vector.

(Figure: zooming results: original, component-wise median, vector median.)

Filling CFA components: the conventional approach destroys the underlying CFA structure; specially designed filling operations are used instead.

G interpolation step: z_(p,q) = Σ_{j=1}^{4} w'_j z_j, with edge-sensing weights w_i = 1/(1 + Σ_{j=1}^{4} |z_i - z_j|) and normalized weights w'_j = w_j / Σ_{j=1}^{4} w_j. (Figure: interpolator neighbor patterns z1, z2, z3, z4.)

R interpolation steps: utilize both spatial and spectral image characteristics; the spectral quantities z_i = R_i - G_i are formed using spatially shifted samples:

z_(p,q) = z_(p,q-1) + Σ_{j=1}^{4} w_j z_j / Σ_{j=1}^{4} w_j = z_(p,q-1) + Σ_{j=1}^{4} w'_j z_j

B interpolation steps: diagonal symmetry compared to the R components; spectral quantities z_i = B_i - G_i:

z_(p,q) = z_(p-1,q) + Σ_{j=1}^{4} w_j z_j / Σ_{j=1}^{4} w_j = z_(p-1,q) + Σ_{j=1}^{4} w'_j z_j
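The G interpolation step above can be sketched for one empty lattice location, given its four neighbors z1..z4; this is a simplified sketch of the weighted interpolator, not a full CFA-zooming implementation:

```python
import numpy as np

def zoom_g_step(z_neighbors):
    """Edge-weighted G interpolation: z_(p,q) = sum_j w'_j z_j with
    w_i = 1 / (1 + sum_j |z_i - z_j|) and w'_j = w_j / sum(w)."""
    z = np.asarray(z_neighbors, dtype=float)
    w = np.array([1.0 / (1.0 + np.abs(zi - z).sum()) for zi in z])
    w /= w.sum()                      # normalize the edge-sensing weights
    return float(np.dot(w, z))

uniform = zoom_g_step([40.0, 40.0, 40.0, 40.0])   # flat area: plain average
```

In a flat area all weights are equal and the step reduces to averaging; a neighbor that disagrees with the others (an edge or outlier) receives a small weight, so the interpolated value stays close to the consistent majority instead of the plain mean.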

(Figure: evaluation procedure for CFA image zooming: the original image o: Z^2 -> Z^3 of size K1 x K2 is down-sampled to size (K1/2) x (K2/2), sampled to a Bayer image z': Z^2 -> Z, zoomed to the CFA image z: Z^2 -> Z, and demosaicked to x: Z^2 -> Z^3; the restored image y: Z^2 -> Z^3 is compared with the original, giving the processing error |o - y|: Z^2 -> Z^3.)

Camera image storage: either in a CFA format or as a full-color image. More expensive devices use the tagged image file format for electronic photography (TIFF-EP), which stores details about camera settings, spectral sensitivities, and the illuminant used. Consumer digital cameras store the full-color image in a compressed format using the Joint Photographic Experts Group (JPEG) standard; the exchangeable image file (EXIF) format carries metadata about the camera and the environment. New possibilities: CFA image data compression allows for the storage and transmission of significantly less information compared to full-color image compression solutions.

Standard (full-color) image compression: although conventionally used, compressing demosaicked images rather than the CFA images directly can be counterproductive; demosaicking triples the amount of data to be compressed by interpolating the red, green and blue channels.

CFA image -> demosaicking -> compression -> storage -> decompression -> display

A key task of color image compression is to decorrelate the interpolated color bands; it essentially attempts to reverse the demosaicking process, in disagreement with demosaicking.

Raw camera image data compression: harder to compress due to the interlaced structure of the red, green and blue samples, yet the CFA image is only one third of the demosaicked image in size, which effectively triples the compression ratio when measured against the size of the demosaicked image.

CFA image -> preprocessing (optional) -> compression -> storage -> decompression -> demosaicking -> display

Possible preprocessing for CFA image compression: spatial rearrangement of the CFA image data, e.g. separating a 6x6 block [z_(1,1) ... z_(6,6)] into subimages of like-colored samples: the G samples z_(odd,odd) and z_(even,even), the R samples z_(odd,even), and the B samples z_(even,odd).

CFA image compression: consumer-electronics solutions are constrained by low cost, throughput bottlenecks and power conservation, so compression and demosaicking are done inexpensively but suboptimally on camera; high-end solutions use lossless or near-lossless coding, with demosaicking performed off camera.
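The spatial rearrangement can be sketched as splitting the Bayer CFA image into four like-colored quadrant subimages before compression; the arrays below are 0-based while the slides count positions from 1:

```python
import numpy as np

def split_bayer(z):
    """Separate a Bayer CFA image into like-colored subimages:
    G from (odd,odd) and (even,even) 1-based positions,
    R from (odd,even), B from (even,odd), per eq. (1)."""
    z = np.asarray(z)
    g1 = z[0::2, 0::2]   # 1-based (odd, odd)   -> G samples
    r  = z[0::2, 1::2]   # 1-based (odd, even)  -> R samples
    b  = z[1::2, 0::2]   # 1-based (even, odd)  -> B samples
    g2 = z[1::2, 1::2]   # 1-based (even, even) -> G samples
    return g1, r, b, g2

parts = split_bayer(np.arange(36).reshape(6, 6))   # the 6x6 example block
```

Each subimage is spatially smooth within one color channel, which removes the interlaced structure that makes the raw mosaic hard for standard coders to compress.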

Digital rights management in digital cameras:

captured images are directly indexed in the single-sensor digital camera, mobile phone or pocket device; indexing is performed by embedding metadata information; suitable embedding techniques: secret sharing (cost-effective approach), watermarking (robust approach)

great importance to the end-users, database software programmers, and consumer electronics manufacturers

Indexing is performed in the R- and B-channel related CFA locations; the HVS is more sensitive to luminance, which is composed primarily of green light.

Embedding procedure: CFA data + metadata (optionally encrypted) -> registration -> indexed CFA data or indexed demosaicked image

indexed CFA data -> demosaicking -> archive -> end-user

Extraction procedure: indexed data extraction -> metadata extraction -> (encrypted) metadata, used to authenticate, organize and retrieve images in personal digital databases (image retrieval and/or database organization in a digital image database)

vast number of captured images; exponentially growing market of image-enabled consumer electronics

Extraction tools: operate on either the CFA images or the demosaicked images in personal image databases, using PC software commonly available from camera manufacturers or conventional public image database tools (e.g. the World Wide Media eXchange (WWMX) database)

Features: confidentiality must be provided for next-generation camera applications (online collaboration using PDAs or mobile phones, video conferencing); demand for encryption combined with compression (large volume of image data). Need for partial encryption: encrypt only part of a compressed bit-stream (preferably the most important information) to reduce computational complexity; compression aids encryption by removing redundancy.

Encryption: the coder directs a stream cipher to encrypt certain bits after color image coding.

image -> C-SPIHT coder -> coded bit-stream (e.g. 0 1 1 1 0 0 ...) -> stream cipher encryption function f_E(·) applied at the locations of the bits B_k,LIP-sig and B_k,LIS-sig; the stream cipher decryption function and the C-SPIHT decoder reverse the process.

Since only the coded bits are encrypted, confidentiality vs. computational overhead can be controlled via the parameter; the visual information cannot be accessed without the decryption key. The scheme depends on the encryption mechanism and/or coder (JPEG, JPEG-2000, SPIHT) used.

Video-demosaicking

Essential in single-sensor VIDEO cameras

motion video or image sequences represent a 3-D image signal, i.e. a time sequence of 2-D images (frames); motion video usually exhibits significant correlation in both the spatial and temporal sense; by omitting the essential temporal characteristics, spatial processing methods, which process the individual frames separately, produce an output image sequence with motion artifacts

Video-Demosaicking

Practical solutions

spectral model used to eliminate color shifts and artifacts; edge-sensing mechanism used to eliminate edge blurring and to produce sharp-looking fine details; spatiotemporal window placed over three consecutive frames to reduce motion artifacts

Edge-sensing mechanism Spectral model

Processing windows: (Figure: temporal, spatial, and spatiotemporal windows placed over the frames t-1, t, t+1; x* marks the locations being processed.)

Spectral quantities in G plane demosaicking:

c_(r-1,s,t+i) = x_(r-1,s,t+i)2 - (x_(r,s,t+i)k + x_(r-2,s,t+i)k) / 2
c_(r,s-1,t+i) = x_(r,s-1,t+i)2 - (x_(r,s,t+i)k + x_(r,s-2,t+i)k) / 2
c_(r,s+1,t+i) = x_(r,s+1,t+i)2 - (x_(r,s,t+i)k + x_(r,s+2,t+i)k) / 2
c_(r+1,s,t+i) = x_(r+1,s,t+i)2 - (x_(r,s,t+i)k + x_(r+2,s,t+i)k) / 2

Spatiotemporal video-demosaicking

Fast video-demosaicking procedure

usage in PDAs and mobile-phone imaging applications; utilization of multistage unidirectional spatiotemporal filtering concepts to overcome the lack of spectral information; (r,s) denotes the spatial location and t the frame index; three frames are used: past (i = -1), actual (i = 0), future (i = 1)

G plane demosaicking:  x_(r,s,t)2 = x_(r,s,t)k + Σ_{j=1}^{N} {w_j c_j} / Σ_{j=1}^{N} w_j

R and B plane demosaicking:  x_(r,s,t)k = x_(r,s,t)2 + Σ_{j=1}^{N} {w_j c_j} / Σ_{j=1}^{N} w_j

essential spectral quantities are formed over the spatiotemporal neighborhood; the structural content is followed by spatiotemporal edge-sensing weights; the color component to be outputted is obtained via weighted-average operations defined over unidirectional demosaicked values
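The two formulas share one weighted-correction aggregation; a sketch, with the spectral quantities c_j and edge-sensing weights w_j assumed to be already computed over the spatiotemporal neighborhood:

```python
import numpy as np

def weighted_correction(base, c, w):
    """x = base + sum_j(w_j * c_j) / sum_j(w_j): used with base = x_k
    and G-plane quantities c to restore G, or with base = x_2 and
    difference quantities c = x_k - x_2 to restore R or B."""
    c, w = np.asarray(c, dtype=float), np.asarray(w, dtype=float)
    return float(base + np.dot(w, c) / w.sum())

# four neighbors agreeing on a +20 spectral offset lift the base by 20
g_est = weighted_correction(80.0, [20.0, 20.0, 20.0, 20.0], [1.0, 1.0, 1.0, 1.0])
```

A neighbor with zero weight contributes nothing, so disagreement across a spatiotemporal edge (small w_j) automatically suppresses that neighbor's spectral quantity.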

G plane demosaicking

(Figures: window configurations (a)-(f) used in G as well as R and B plane demosaicking, and configurations used in R and B plane demosaicking only.)

Spatiotemporal edge-sensing weights, bidirectional variant (each weight combines a temporal and a spatial difference):

w1 = (1 + |c_(r-1,s,t-1) - c_(r+1,s,t+1)| + |c_(r-1,s,t) - c_(r+1,s,t)|)^(-1)
w2 = (1 + |c_(r,s-1,t-1) - c_(r,s+1,t+1)| + |c_(r,s-1,t) - c_(r,s+1,t)|)^(-1)
w3 = (1 + |c_(r,s+1,t-1) - c_(r,s-1,t+1)| + |c_(r,s+1,t) - c_(r,s-1,t)|)^(-1)
w4 = (1 + |c_(r+1,s,t-1) - c_(r-1,s,t+1)| + |c_(r+1,s,t) - c_(r-1,s,t)|)^(-1)

with aggregated spectral quantities

c2 = (c_(r,s-1,t-1) + c_(r,s+1,t+1) + c_(r,s-1,t) + c_(r,s+1,t)) / 4
c3 = (c_(r,s+1,t-1) + c_(r,s-1,t+1) + c_(r,s+1,t) + c_(r,s-1,t)) / 4
c4 = (c_(r+1,s,t-1) + c_(r-1,s,t+1) + c_(r+1,s,t) + c_(r-1,s,t)) / 4

Unidirectional variant:

w1 = 1/(1 + |c_(r-1,s,t-1) - c_(r+1,s,t+1)|)     c2 = (c_(r,s-1,t-1) + c_(r,s+1,t+1)) / 2
w2 = 1/(1 + |c_(r,s-1,t-1) - c_(r,s+1,t+1)|)     c3 = (c_(r,s+1,t-1) + c_(r,s-1,t+1)) / 2
w3 = 1/(1 + |c_(r,s+1,t-1) - c_(r,s-1,t+1)|)     c4 = (c_(r+1,s,t-1) + c_(r-1,s,t+1)) / 2
w4 = 1/(1 + |c_(r+1,s,t-1) - c_(r-1,s,t+1)|)     c5 = (c_(r-1,s,t) + c_(r+1,s,t)) / 2
w5 = 1/(1 + |c_(r-1,s,t) - c_(r+1,s,t)|)         c6 = (c_(r,s-1,t) + c_(r,s+1,t)) / 2
w6 = 1/(1 + |c_(r,s-1,t) - c_(r,s+1,t)|)

Diagonal spectral quantities used in R and B plane demosaicking:

c_(r-1,s-1,t+i) = x_(r-1,s-1,t+i)k - x_(r-1,s-1,t+i)2
c_(r-1,s+1,t+i) = x_(r-1,s+1,t+i)k - x_(r-1,s+1,t+i)2
c_(r+1,s-1,t+i) = x_(r+1,s-1,t+i)k - x_(r+1,s-1,t+i)2
c_(r+1,s+1,t+i) = x_(r+1,s+1,t+i)k - x_(r+1,s+1,t+i)2

Evaluation procedure

(Figure: the original K1 x K2 x K3 color video sequence (frames t = 1, 2, ..., 62, ..., K3) is sampled to CFA frames, video-demosaicked, and the restored video sequence is compared with the original to obtain the processing error.)

Objective criteria

Mean absolute error (MAE):

MAE = (1 / (3 K1 K2 K3)) Σ_{r=1}^{K1} Σ_{s=1}^{K2} Σ_{t=1}^{K3} Σ_{k=1}^{3} | o_(r,s,t)k - x_(r,s,t)k |

Mean square error (MSE):

MSE = (1 / (3 K1 K2 K3)) Σ_{r=1}^{K1} Σ_{s=1}^{K2} Σ_{t=1}^{K3} Σ_{k=1}^{3} ( o_(r,s,t)k - x_(r,s,t)k )^2

Normalized color difference (NCD):

NCD = [ Σ_{r=1}^{K1} Σ_{s=1}^{K2} Σ_{t=1}^{K3} sqrt( Σ_{k=1}^{3} ( o_(r,s,t)k - x_(r,s,t)k )^2 ) ] / [ Σ_{r=1}^{K1} Σ_{s=1}^{K2} Σ_{t=1}^{K3} sqrt( Σ_{k=1}^{3} ( o_(r,s,t)k )^2 ) ]

MAE and MSE are defined in the RGB space; NCD is defined in the perceptually uniform CIE LUV space, with o and x expressed in CIE LUV coordinates.

Experimental results

(Figures: original frames and enlarged regions for each test sequence: (a) original image, (b) bilinear demosaicking, (c) unidirectional demosaicking, (d) bidirectional demosaicking.)

OBJECTIVE EVALUATION OF VIDEO DEMOSAICKING (three test color videos, including Bikes and Nature; three methods per video)

Method           MAE      MSE     NCD
bilinear         6.640    177.5   0.0994
unidirectional   2.660    24.6    0.0439
bidirectional    2.519    21.4    0.0421
bilinear         6.498    174.5   0.1103
unidirectional   1.986    14.7    0.0439
bidirectional    1.912    13.4    0.0424
bilinear         10.238   355.1   0.1448
unidirectional   3.369    37.4    0.0545
bidirectional    3.035    28.7    0.0503

THE NUMBER OF NORMALIZING OPERATIONS PER SPATIAL LOCATION

Method           Step     Additions   Subtractions   Multiplications   Divisions   Abs. values
unidirectional   Step 1   35          18             6                 25          6
                 Step 2   23          18             6                 13          6
                 Total    58          36             12                38          12
bidirectional    Step 1   43          20             6                 21          8
                 Step 2   31          20             6                 9           8
                 Total    74          40             12                30          16

Conclusions

R. Lukac, B. Smolka, K. Martin, K.N. Plataniotis, and A.N. Venetsanopoulos, "Vector Filtering for Color Imaging," IEEE Signal Processing Magazine, vol. 22, no. 1, pp. 74-86, January 2005.

R. Lukac and K.N. Plataniotis, "Fast Video Demosaicking Solution for Mobile Phone Imaging Applications," IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 675-681, May 2005.

R. Lukac and K.N. Plataniotis, "Data-Adaptive Filters for Demosaicking: A Framework," IEEE Transactions on Consumer Electronics, vol. 51, no. 2, pp. 560-570, May 2005.

R. Lukac, K. Martin, and K.N. Plataniotis, "Demosaicked Image Postprocessing Using Local Color Ratios," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 6, pp. 914-920, June 2004.

R. Lukac and K.N. Plataniotis, "Normalized Color-Ratio Modelling for CFA Interpolation," IEEE Transactions on Consumer Electronics, vol. 50, no. 2, pp. 737-745, May 2004.

R. Lukac, K.N. Plataniotis, and D. Hatzinakos, "Color Image Zooming on the Bayer Pattern," IEEE Transactions on Circuits and Systems for Video Technology, to appear, vol. 15, 2005.


R. Lukac and K.N. Plataniotis, Color Image Processing: Methods & Applications, CRC Press, September 2006
