
CHAPTER 1

INTRODUCTION
1.1 General Introduction
Images are the most common and convenient means of conveying or transmitting information, since a picture is worth a thousand words. Single-sensor digital color cameras capture images through a color filter array (CFA), such as the Bayer pattern CFA. At each pixel only one of the three primary colors (red, green, and blue) is sampled, and the missing color samples are estimated by a process called color demosaicking (CDM). Demosaicking infers the full color information at every pixel from a mosaic of pixels that each measure only one color component: red, green, or blue. It is part of the processing pipeline required to render these images into a viewable format. In this project, we propose to couple local directional interpolation (LDI) with nonlocal enhancement for more effective CDM.
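As a rough illustration of the single-sensor capture described above, the following MATLAB sketch keeps only one color sample per pixel of a full-color image. The RGGB layout phase and the function name are assumptions made for this example.

    % Simulate Bayer CFA sampling of a full-color image (RGGB layout assumed:
    % red at odd rows/odd columns; real sensors may use a different phase).
    function mosaic = bayer_sample(rgb)
        [M, N, ~] = size(rgb);                                 % rgb is M-by-N-by-3
        mosaic = zeros(M, N);                                  % one sample per pixel
        mosaic(1:2:end, 1:2:end) = rgb(1:2:end, 1:2:end, 1);   % red sites
        mosaic(1:2:end, 2:2:end) = rgb(1:2:end, 2:2:end, 2);   % green sites on red rows
        mosaic(2:2:end, 1:2:end) = rgb(2:2:end, 1:2:end, 2);   % green sites on blue rows
        mosaic(2:2:end, 2:2:end) = rgb(2:2:end, 2:2:end, 3);   % blue sites
    end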

1.1.1 Characteristics of Color Demosaicking


Primary-Consistent Soft-Decision Color Demosaicking for Digital Cameras

In the PCSD framework, multiple estimates of a missing color sample are made under different hypotheses about the edge or texture direction. The estimates are produced by primary-consistent interpolation, meaning that all three primary components of a color are interpolated in the same direction. The PCSD approach can significantly improve the image quality of digital cameras in both subjective and objective measures.
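The directional-hypothesis idea can be illustrated with a minimal MATLAB sketch. This is not the published PCSD algorithm, which combines several estimates through a soft decision; it only shows the principle of committing every channel to one interpolation direction. The function name and the simple gradient test are assumptions.

    % Choose an interpolation direction for the missing green sample at a
    % red or blue CFA position (i, j); assumes (i, j) is away from the border.
    function g = green_by_direction(mosaic, i, j)
        gH = (mosaic(i, j-1) + mosaic(i, j+1)) / 2;   % horizontal estimate
        gV = (mosaic(i-1, j) + mosaic(i+1, j)) / 2;   % vertical estimate
        dH = abs(mosaic(i, j-1) - mosaic(i, j+1));    % horizontal gradient
        dV = abs(mosaic(i-1, j) - mosaic(i+1, j));    % vertical gradient
        if dH <= dV
            g = gH;   % interpolate all three channels horizontally at this pixel
        else
            g = gV;   % interpolate all three channels vertically at this pixel
        end
    end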

Edge Adaptive Color Demosaicking Based on the Spatial Correlation of the Bayer Color Difference

This is an edge-adaptive color demosaicking algorithm that classifies region types and estimates the edge direction from the Bayer color filter array (CFA) samples. To improve image quality with a consistent edge direction, the region of an image is classified into three types: edge, edge pattern, and flat regions. Based on the region type, the method estimates the edge direction adaptively. As a result, it reconstructs clear edges with reduced visual distortions in the edge and edge pattern regions.
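A rough MATLAB sketch of such a classification is given below; it uses raw CFA gradients and an assumed threshold T, and it omits the separate edge pattern class and the Bayer color-difference statistics of the cited method.

    % Classify an odd-sized CFA patch as flat or as having a dominant edge
    % direction; T is an assumed flatness threshold.
    function region = classify_region(window, T)
        dH = sum(sum(abs(window(:, 1:end-2) - window(:, 3:end))));   % horizontal activity
        dV = sum(sum(abs(window(1:end-2, :) - window(3:end, :))));   % vertical activity
        if max(dH, dV) < T
            region = 'flat';
        elseif dH <= dV
            region = 'horizontal edge';
        else
            region = 'vertical edge';
        end
    end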
Demosaicking Using Vector Spectral Model

A solution based on the vector spectral model preserves the magnitude and directional characteristics of the single-sensor captured image in both smooth and edge areas.

Reduction of Color Artifacts Using Inverse Demosaicking


Inverse demosaicking refers to the recovery of the single-sensor image values from a full color image. Early digital cameras used primitive demosaicking algorithms to produce full color images, which resulted in inferior image quality with color artifacts. Generally, those artifacts cannot be removed by direct filtering. If the actual image sensor values can be recovered from a full color image and re-demosaicked with a state-of-the-art, recently developed demosaicking algorithm, a better image can be produced without filtering. A wavelet-transform-based method is proposed to inverse-demosaick a full color image in order to recover the actual sensor values; the result is then re-demosaicked using an advanced, recently developed demosaicking method to reproduce an output image with minimal color artifacts.
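Assuming the original mosaic layout is known (RGGB here), the resampling step of this idea can be sketched in a few MATLAB lines. The wavelet-based recovery of the sensor values is not modelled; plain resampling with the bayer_sample sketch given earlier stands in for it, the demosaic function of the Image Processing Toolbox stands in for a recently developed method, and the file name is purely illustrative.

    old_rgb = imread('old_camera_output.png');     % hypothetical artifact-laden image
    mosaic  = bayer_sample(double(old_rgb));       % inverse demosaicking: back to CFA samples
    new_rgb = demosaic(uint8(mosaic), 'rggb');     % re-demosaick with a newer algorithm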

1.1.2 Concept
A more economical and practical way to record the primary colors is to permanently place a filter, called a color filter array, over each individual photosite. By breaking the sensor up into a variety of red, blue, and green pixels, it is possible to gather enough information in the general vicinity of each sensor site to make very accurate guesses about the true color at that location. This process of looking at the other pixels in the neighborhood of a sensor and making an educated guess is called interpolation. The most common pattern of filters is the Bayer filter pattern, which alternates a row of red and green filters with a row of blue and green filters. The pixels are not evenly divided: there are as many green pixels as there are blue and red combined. This is because the human eye is not equally sensitive to all three colors, so more information from the green pixels is needed to create an image that the eye will perceive as "true color."
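The green-heavy layout can be checked directly by tiling the 2-by-2 Bayer unit, as in this small MATLAB sketch (the tile size is arbitrary):

    pattern = repmat(['R' 'G'; 'G' 'B'], 4, 4);   % an 8-by-8 tile of the RGGB unit
    nG = sum(pattern(:) == 'G');                  % 32 green sites
    nR = sum(pattern(:) == 'R');                  % 16 red sites
    nB = sum(pattern(:) == 'B');                  % 16 blue sites: green = red + blue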

The advantages of this method are that only one sensor is required and that all the color information (red, green, and blue) is recorded at the same moment, so the camera can be smaller, cheaper, and useful in a wider variety of situations. The raw output from a sensor with a Bayer filter is a mosaic of red, green, and blue pixels of different intensities. Digital cameras use specialized demosaicking algorithms to convert this mosaic into an equally sized array of true colors. The key is that each colored pixel can be used more than once: the true color of a single pixel can be estimated by averaging the values of the closest surrounding pixels. In a CFA image, spatial and chromatic information are mixed together, since only one color sensitivity is available at each pixel. For this reason, demosaicking always involves a trade-off between spatial resolution reconstruction and color rendering.
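This neighborhood-averaging idea corresponds to bilinear demosaicking, sketched below in MATLAB with convolution masks. An RGGB layout and zero-padded borders are assumed; practical pipelines treat borders and edges more carefully.

    % Bilinear demosaicking: every missing sample is the average of its
    % nearest recorded neighbours of the same color (RGGB layout assumed).
    function rgb = bilinear_demosaic(mosaic)
        [M, N] = size(mosaic);
        [R, G, B] = deal(zeros(M, N));
        R(1:2:end, 1:2:end) = mosaic(1:2:end, 1:2:end);   % recorded red samples
        G(1:2:end, 2:2:end) = mosaic(1:2:end, 2:2:end);   % recorded green samples
        G(2:2:end, 1:2:end) = mosaic(2:2:end, 1:2:end);
        B(2:2:end, 2:2:end) = mosaic(2:2:end, 2:2:end);   % recorded blue samples
        kG  = [0 1 0; 1 4 1; 0 1 0] / 4;    % green: average of 4 axial neighbours
        kRB = [1 2 1; 2 4 2; 1 2 1] / 4;    % red/blue: axial and diagonal neighbours
        rgb = cat(3, conv2(R, kRB, 'same'), conv2(G, kG, 'same'), conv2(B, kRB, 'same'));
    end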

1.2 Problem Statement


For images with sharp color transitions and high color saturation, present algorithms are computationally very complex and still produce poor quality images. A simple technique with superior performance, producing better quality images than the current ones, is therefore required. The proposed method should also remain open to later enhancement.

1.3 Objective
The objectives of this project are:
- Maximum preservation of the image resolution
- Low computational complexity for fast processing or efficient in-camera hardware implementation
- Amenability to analysis for accurate noise reduction
- Avoidance of the introduction of false color artifacts, such as chromatic aliases

1.4 Scope
This project reconstructs a full color image (i.e., a full set of color triples) from the spatially undersampled color channels output by the CFA. It exploits the image's nonlocal redundancy to improve the local color reproduction result. The application runs on the Windows operating system and requires MATLAB 7.1 or a later version.
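The sense in which nonlocal redundancy can be exploited is sketched below as a simple patch-based weighted average in MATLAB. The search window, patch size, and filtering parameter h are assumptions, and the enhancement step actually used in the project may differ in detail.

    % Refine a locally interpolated channel by averaging pixels whose
    % surrounding 3x3 patches look similar within a 5x5 search window.
    function out = nonlocal_refine(chan, h)
        [M, N] = size(chan);
        out = chan;
        for i = 4 : M-3                                   % skip a small border for simplicity
            for j = 4 : N-3
                p0 = chan(i-1:i+1, j-1:j+1);              % reference patch
                wsum = 0; acc = 0;
                for di = -2 : 2
                    for dj = -2 : 2
                        p = chan(i+di-1:i+di+1, j+dj-1:j+dj+1);
                        w = exp(-sum((p(:) - p0(:)).^2) / h^2);   % patch-similarity weight
                        wsum = wsum + w;
                        acc  = acc + w * chan(i+di, j+dj);
                    end
                end
                out(i, j) = acc / wsum;                   % nonlocal weighted average
            end
        end
    end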
