
AZAD TECHNICAL CAMPUS
Department of Electronics, Instrumentation and Control Engineering
Session 2015-16

SEMINAR REPORT
ON
DIGITAL IMAGE PROCESSING

In partial fulfillment of M.Tech (EIC, 3rd Semester)
Specialization: Instrumentation and Control Systems

Submitted by: Manish Mishra
Roll No. 1405322503

Under the supervision of
Mr. Faisal Hasim
Assistant Professor

CERTIFICATE

Certified that the seminar entitled DIGITAL IMAGE PROCESSING, submitted by MANISH MISHRA to the Department of Electronics, Instrumentation and Control Engineering, Azad Technical Campus, Lucknow, A.P.J. Abdul Kalam University, Lucknow, in partial fulfillment of the requirements for the M.Tech 3rd Semester in Electronics, Instrumentation and Control Engineering, is a bonafide record of the candidate's own work carried out by him under my supervision and guidance. The matter embodied in this report has not been submitted for the award of any other degree or diploma.

Dated:

Mr. Faisal Hasim                    Mr. Md. Sanawer Alam
Seminar In-charge                   H.O.D.

ACKNOWLEDGEMENT

One of the most pleasant aspects of completing this report is the opportunity to thank those who have contributed to it, directly or indirectly.

I am grateful to Mr. Md. Sanawer Alam, Head of the Electronics, Instrumentation & Control Engineering Department, AIET, Lucknow, for his kind support and encouragement.

I am grateful to my course coordinator and seminar in-charge, Mr. Faisal Hasim, for his guidance, valuable suggestions and wholehearted support, which have led to the successful completion of this report.

I am also very thankful to the other staff members of the department for their endless support, encouragement and love. Finally, I express my sincere thanks to all those who helped me in completing this report.

Manish Mishra
Roll No. 1405322503

ABSTRACT

From 2000 through 2016, digital imaging almost completely replaced the magnificent silver halide technology as the de facto means of capturing, storing, and viewing images. The motion picture industry is well on its way to replacing film, despite some of film's superior imaging characteristics. What probably few foresaw was the complete dominance of the cell phone camera: a smartphone linked to the global network, with many application packages to ease navigation of our complex world, combined with a camera that takes reasonably good images. This transformation over such a short period of time is a testimony to the power of technology and marketing. In this report, the focus will be on the digital camera, be it a simple point-and-shoot, a sophisticated single-lens reflex digital camera with 24 million pixels, or a large-format camera with untold pixels. All these cameras are based on imaging sensor technology, CCD or CMOS. Each camera has to find ways to prevent the color aliasing introduced by the need for a color filter array (CFA), improve color reproduction, adjust for different illuminants, reduce noise, and improve the sharpness of the final image. There are no limits to the technologies incorporated into digital cameras, including face detection, automatic red-eye removal, and stabilizing mechanisms that remove jitter from camera motion. Here, the focus will be on six major aspects of digital imaging: sampling and aliasing, image sharpness and enhancement, noise in digital cameras, noise reduction methods within and outside of the camera, exposure latitude, and, finally, the concept of ISO speed for a camera and a sensor.

Contents
What is Image Processing?
Introduction to Digital Image Processing
History
Basic Concepts of Image Processing
Image functions
Digital Image properties
Topological Image properties
Purpose of Image Processing
Types of Image Processing
Current Research in Image Processing
Future Scope
Applications
Advantages
Disadvantages
Conclusion
References

What is Image Processing?


Image processing is a method of converting an image into digital form and performing operations on it, in order to obtain an enhanced image or to extract useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or a set of characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies established signal-processing methods to them. It is among the most rapidly growing technologies today, with applications in many aspects of business, and it forms a core research area within the engineering and computer science disciplines.

Image processing basically includes the following three steps:
1. Importing the image with an optical scanner or by digital photography.
2. Analyzing and manipulating the image, which includes data compression, image enhancement, and spotting patterns that are not visible to the human eye, as in satellite photographs.
3. Output, the last stage, in which the result can be an altered image or a report based on the image analysis.
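The three steps above can be sketched in a few lines of NumPy. The small array standing in for a scanned photograph and the choice of a linear contrast stretch as the "manipulation" step are illustrative assumptions, not part of the original text:

```python
import numpy as np

# Step 1 -- import: in practice the image would come from a scanner or
# camera; here a tiny synthetic 8-bit grayscale image stands in for it.
image = np.array([[ 50,  60,  70],
                  [ 60,  80, 100],
                  [ 70, 100, 120]], dtype=np.uint8)

# Step 2 -- analyze/manipulate: a simple enhancement, a linear contrast
# stretch mapping the darkest pixel to 0 and the brightest to 255.
lo, hi = image.min(), image.max()
stretched = ((image - lo).astype(np.float64) / (hi - lo) * 255).round().astype(np.uint8)

# Step 3 -- output: the result is an altered image (or a report derived from it).
print(stretched.min(), stretched.max())  # 0 255
```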

Introduction to Digital Image Processing:


Vision allows humans to perceive and understand the world surrounding us. Computer vision aims to duplicate the effect of human vision by electronically perceiving and understanding an image.

Giving computers the ability to see is not an easy task. We live in a three-dimensional (3D) world, but when computers try to analyze objects in 3D space, the available visual sensors (e.g., TV cameras) usually give two-dimensional (2D) images, and this projection to a lower number of dimensions incurs an enormous loss of information.

To simplify the task of computer vision understanding, two levels are usually distinguished: low-level image processing and high-level image understanding. Low-level methods usually use very little knowledge about the content of images. High-level processing is based on knowledge, goals, and plans of how to achieve those goals; artificial intelligence (AI) methods are used in many cases. High-level computer vision tries to imitate human cognition and the ability to make decisions according to the information contained in the image.

This report deals almost exclusively with low-level image processing; high-level image processing is discussed in the course Image Analysis and Understanding, which is a continuation of this course.

History:
Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, the University of Maryland, and a few other places, with applications to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photo enhancement. But the cost of processing was fairly high with the computing equipment of that era. In the 1970s, digital image processing proliferated as cheaper computers became available. Digitizing is the process of creating a film or electronic image of any picture or paper form. It is accomplished by scanning or photographing an object and turning it into a matrix of dots (a bitmap), the meaning of which is unknown to the computer, only to the human viewer. Scanned images of text may be encoded into computer data (ASCII or EBCDIC) with page recognition software (OCR).

Basic Concepts:
A signal is a function depending on some variable with physical meaning. Signals can be:
- One-dimensional (e.g., dependent on time),
- Two-dimensional (e.g., images dependent on two co-ordinates in a plane),
- Three-dimensional (e.g., describing an object in space),
- Or higher-dimensional.

Image functions:
The image can be modeled by a continuous function of two or three variables: the arguments are co-ordinates x, y in a plane, and if images change in time a third variable t may be added. The image function values correspond to the brightness at image points. The function value can express other physical quantities as well (temperature, pressure distribution, distance from the observer, etc.). Brightness integrates different optical quantities; using brightness as a basic quantity allows us to avoid describing the very complicated process of image formation. The image on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image, bearing information about brightness at points, an intensity image.
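As a minimal illustration, a digital intensity image can be stored as a 2D array whose entries sample the image function f(x, y); the brightness values below are arbitrary numbers chosen for the sketch:

```python
import numpy as np

# A digital intensity image: samples of the image function f(x, y) on a
# grid, with values giving the brightness (0..255) at each image point.
f = np.array([[  0,  64, 128],
              [ 64, 128, 192],
              [128, 192, 255]], dtype=np.uint8)

print(f[1, 2])   # brightness at row 1, column 2 -> 192
print(f.shape)   # spatial resolution: (rows, columns)

# An image changing in time, f(x, y, t) -- e.g., video -- adds a third index:
video = np.stack([f, 255 - f])   # two frames, at t = 0 and t = 1
print(video.shape)               # (frames, rows, columns)
```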

The real world, which surrounds us, is intrinsically 3D. The 2D intensity image is the result of a perspective projection of the 3D scene. When 3D objects are mapped into the camera plane by perspective projection, a lot of information disappears, as such a transformation is not one-to-one. Recognizing or reconstructing objects in a 3D scene from one image is an ill-posed problem. Recovering the information lost by perspective projection is only one, mainly geometric, problem of computer vision. The second problem is how to understand image brightness. The only information available in an intensity image is the brightness of the appropriate pixel, which depends on a number of independent factors such as:
- Object surface reflectance properties (given by the surface material, microstructure and marking),
- Illumination properties,
- And object surface orientation with respect to the viewer and light source.

Digital image properties:


Metric properties of digital images:
Distance is an important example. The distance between two pixels in a digital image is a significant quantitative measure. The Euclidean distance between pixels (i, j) and (h, k) is defined by

    D_E((i, j), (h, k)) = sqrt((i - h)^2 + (j - k)^2).

Pixel adjacency is another important concept in digital images: a pixel's 4-neighborhood consists of its horizontal and vertical neighbors, while its 8-neighborhood also includes the diagonal neighbors.

It will become necessary to consider important sets consisting of several adjacent pixels -- regions. A region is a contiguous set.
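The Euclidean distance, together with the step-count distances induced by the 4- and 8-neighborhoods (often written D_4, the city-block distance, and D_8, the chessboard distance), can be sketched as follows; the sample pixel coordinates are arbitrary:

```python
import numpy as np

def euclidean(p, q):
    """D_E: straight-line distance between pixels p = (i, j) and q = (h, k)."""
    return np.hypot(p[0] - q[0], p[1] - q[1])

def city_block(p, q):
    """D_4: minimum number of steps using only 4-neighborhood moves."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    """D_8: minimum number of steps using 8-neighborhood (diagonal) moves."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(euclidean(p, q))   # 5.0
print(city_block(p, q))  # 7
print(chessboard(p, q))  # 4
```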

Topological properties of digital images


Topological properties of images are invariant to rubber-sheet transformations: stretching does not change the contiguity of the object parts and does not change the number of holes in regions. One such image property is the Euler--Poincare characteristic, defined as the difference between the number of regions and the number of holes in them. The convex hull is also used to describe topological properties of objects: it is the smallest region which contains the object, such that any two points of the region can be connected by a straight line, all points of which belong to the region.
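A rough sketch of the Euler--Poincare characteristic on a toy binary image, using a simple flood-fill labeling. Foreground components use 4-connectivity and background components use 8-connectivity (a common pairing, assumed here); the test image, a square region with a single one-pixel hole, is invented for the sketch:

```python
import numpy as np
from collections import deque

def label_components(mask, neighbors):
    """Flood-fill labeling of connected True pixels; returns (labels, count)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in neighbors:
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]

# One square region with a single one-pixel hole in its middle.
img = np.zeros((7, 7), dtype=bool)
img[1:6, 1:6] = True
img[3, 3] = False

_, n_regions = label_components(img, N4)    # foreground: 4-connectivity
bg_labels, _ = label_components(~img, N8)   # background: 8-connectivity

# Background components touching the image border are not holes.
border = (set(bg_labels[0]) | set(bg_labels[-1])
          | set(bg_labels[:, 0]) | set(bg_labels[:, -1])) - {0}
n_holes = len(set(bg_labels[bg_labels > 0].tolist()) - border)

euler = n_regions - n_holes
print(n_regions, n_holes, euler)  # 1 region, 1 hole -> Euler characteristic 0
```

Stretching this image (a rubber-sheet transformation) would change distances but not the region count, the hole count, or their difference.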

Purpose of Image processing


The purpose of image processing is divided into 5 groups. They are:
1. Visualization - Observe objects that are not visible.
2. Image sharpening and restoration - Create a better image.
3. Image retrieval - Seek the image of interest.
4. Measurement of pattern - Measure various objects in an image.
5. Image recognition - Distinguish the objects in an image.

Types of Image Processing


The two types of methods used for image processing are analog and digital image processing. Analog or visual techniques of image processing can be used for hard copies like printouts and photographs. Image analysts use various fundamentals of interpretation while applying these visual techniques. The processing is not confined to the area that has to be studied; it also draws on the knowledge of the analyst. Association is another important tool in image processing through visual techniques. So analysts apply a combination of personal knowledge and collateral data to image processing.

Digital processing techniques help in the manipulation of digital images by using computers. Raw data from the imaging sensors on a satellite platform contains deficiencies. To get over such flaws and to recover the original information, the data has to undergo various phases of processing. The three general phases that all types of data have to undergo when using the digital technique are pre-processing, enhancement and display, and information extraction.
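The three phases can be sketched on synthetic data. The mean filter, contrast stretch, and brightness threshold below are illustrative stand-ins for real pre-processing, enhancement, and information-extraction steps, and the noisy "scene" is generated rather than taken from a sensor:

```python
import numpy as np

# Stand-in for raw sensor data: a noisy 8x8 synthetic scene.
rng = np.random.default_rng(0)
raw = np.clip(rng.normal(loc=120.0, scale=30.0, size=(8, 8)), 0, 255)

# Phase 1 -- pre-processing: suppress sensor noise with a 3x3 mean filter.
padded = np.pad(raw, 1, mode="edge")
smoothed = sum(padded[dy:dy + 8, dx:dx + 8]
               for dy in range(3) for dx in range(3)) / 9.0

# Phase 2 -- enhancement and display: stretch contrast to the full 0..255 range.
lo, hi = smoothed.min(), smoothed.max()
enhanced = (smoothed - lo) / (hi - lo) * 255.0

# Phase 3 -- information extraction: e.g., measure the fraction of bright pixels.
bright_fraction = float((enhanced > 128).mean())
print(round(bright_fraction, 2))
```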

Current Research
Wide-ranging research is being done in image processing techniques.
1. Cancer imaging - Tools such as PET, MRI, and computer-aided detection help to diagnose and monitor tumours.
2. Brain imaging - Focuses on the normal and abnormal development of the brain, brain ageing and common disease states.
3. Image processing - This research incorporates structural and functional MRI in neurology, analysis of bone shape and structure, development of functional imaging tools in oncology, and PET image processing software development.
4. Imaging technology - Developments in imaging technology have created the need to establish whether new technologies are effective and cost-beneficial. This work covers the following areas:
Magnetic resonance imaging of the knee
Computer-aided detection in mammography
Endoscopic ultrasound in staging oesophageal cancer
Magnetic resonance imaging in low back pain
5. Ophthalmic imaging - This works under two categories:
Development of automated software that analyzes retinal images to show early signs of diabetic retinopathy
Development of instrumentation, concentrating on the scanning laser ophthalmoscope

Future
We are all in the midst of a revolution ignited by fast developments in computer technology and imaging. Contrary to common belief, computers are not yet able to match humans in calculations related to image processing and analysis. But with the increasing sophistication and power of modern computing, computation will go beyond the conventional von Neumann sequential architecture and may contemplate optical execution too. Parallel and distributed computing paradigms are anticipated to improve response times for image processing results.

Applications of image processing


1. Computer Vision
Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory for building artificial systems that obtain information from images. The image data can take many forms, such as a video sequence, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply the theories and models of computer vision to the construction of computer vision systems. Computer vision can also be described as a complement (but not necessarily the opposite) of biological vision. In biological vision, the visual perception of humans and various animals is studied, resulting in models of how these systems operate in terms of physiological processes. Computer vision, on the other hand, studies and describes artificial vision systems that are implemented in software and/or hardware. Interdisciplinary exchange between biological and computer vision has proven increasingly fruitful for both fields.

2. Face Detection
Face detection is a computer technology that determines the locations and sizes of human faces in arbitrary (digital) images. It detects facial features and ignores anything else, such as buildings, trees and bodies. Face detection can be regarded as a specific case of object-class detection, in which the task is to find the locations and sizes of all objects in an image that belong to a given class; examples include upper torsos, pedestrians, and cars. Face detection can also be regarded as a more general case of face localization, in which the task is to find the locations and sizes of a known number of faces (usually one); in face detection, one does not have this additional information. Face detection is used in biometrics, often as a part of (or together with) a facial recognition system. It is also used in video surveillance, human-computer interfaces and image database management. Some recent digital cameras use face detection for autofocus [1]. Also, face detection is useful for selecting regions of interest in photo slideshows that use a pan-and-scale Ken Burns effect.
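Real face detectors rely on trained classifiers (e.g., Haar cascades or neural networks), but the underlying sliding-window idea, scanning every location and reporting windows that match a learned pattern, can be illustrated with a toy normalized-correlation "detector". The 3x3 pattern and the scene below are invented for the sketch:

```python
import numpy as np

def detect(image, template, threshold=0.9):
    """Toy sliding-window detector: report every window whose normalized
    cross-correlation with the template exceeds the threshold."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    hits = []
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            window = image[y:y + th, x:x + tw]
            w = (window - window.mean()) / (window.std() + 1e-9)
            if (w * t).mean() > threshold:
                hits.append((y, x, th, tw))  # location and size of the match
    return hits

# A made-up 3x3 "face" pattern placed in an otherwise empty 8x8 scene.
template = np.array([[1., 0., 1.],
                     [0., 1., 0.],
                     [1., 1., 1.]])
scene = np.zeros((8, 8))
scene[2:5, 3:6] = template

print(detect(scene, template))  # one hit, at row 2, column 3
```

A real system additionally scans at multiple scales and merges overlapping detections; the toy version checks a single scale only.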

3. Remote Sensing
Remote sensing is the small- or large-scale acquisition of information about an object or phenomenon, by the use of recording or real-time sensing device(s) that are
not in physical or intimate contact with the object (such as by way of aircraft,
spacecraft, satellite, buoy, or ship). In practice, remote sensing is the stand-off
collection through the use of a variety of devices for gathering information on a
given object or area. Thus, Earth observation or weather satellite collection
platforms, ocean and atmospheric observing weather buoy platforms, monitoring
of a pregnancy via ultrasound, Magnetic Resonance Imaging (MRI), Positron
Emission Tomography (PET), and space probes are all examples of remote
sensing. In modern usage, the term generally refers to the use of imaging sensor
technologies including but not limited to the use of instruments aboard aircraft and
spacecraft, and is distinct from other imaging-related fields such as medical
imaging.

4. Medical imaging
Medical imaging refers to the techniques and processes used to create images of
the human body (or parts thereof) for clinical purposes (medical procedures
seeking to reveal, diagnose or examine disease) or medical science (including the
study of normal anatomy and physiology). As a discipline and in its widest sense, it
is part of biological imaging and incorporates radiology (in the wider sense),
radiological sciences, endoscopy, (medical) thermography, medical photography
and microscopy (e.g. for human pathological investigations). Medical imaging is
often perceived to designate the set of techniques that noninvasively produce
images of the internal aspect of the body. In this restricted sense, medical imaging
can be seen as the solution of mathematical inverse problems. This means that the cause (the properties of living tissue) is inferred from the effect (the observed signal).
In the case of ultrasonography the probe consists of ultrasonic pressure waves and
echoes inside the tissue show the internal structure. In the case of projection
radiography, the probe is X-ray radiation which is absorbed at different rates in
different tissue types such as bone, muscle and fat.
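The projection-radiography case above can be made concrete with the Beer-Lambert law, I = I0 * exp(-mu * x): the transmitted intensity falls off exponentially with tissue thickness x and linear attenuation coefficient mu. The coefficients below are made-up values for the sketch, not clinical data; only their ordering (bone absorbs more than muscle, muscle more than fat) reflects the text:

```python
import math

I0 = 1.0  # incident X-ray intensity (arbitrary units)

# Illustrative linear attenuation coefficients per cm (made-up values):
mu = {"bone": 0.50, "muscle": 0.20, "fat": 0.10}

def transmitted(tissue, thickness_cm):
    """Beer-Lambert law: I = I0 * exp(-mu * x)."""
    return I0 * math.exp(-mu[tissue] * thickness_cm)

for tissue in mu:
    print(tissue, round(transmitted(tissue, 2.0), 3))
```

The differing transmitted intensities are what produce the contrast between tissue types in the final radiograph.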

5. Microscope Image Processing
Microscope image processing is a broad term that covers the use of digital image
processing techniques to process, analyze and present images obtained from a
microscope. Such processing is now commonplace in a number of diverse fields
such as medicine, biological research, cancer research, drug testing, metallurgy,
etc. A number of manufacturers of microscopes now specifically design in features
that allow the microscopes to interface to an image processing system. Until the
early 1990s, most image acquisition in video microscopy applications was
typically done with an analog video camera, often simply closed circuit TV
cameras. While this required the use of a frame grabber to digitize the images,
video cameras provided images at full video frame rate (25-30 frames per second)
allowing live video recording and processing. While the advent of solid state
detectors yielded several advantages, the real-time video camera was actually
superior in many respects.

6. Lane Departure Warning System
In road-transport terminology, a lane departure warning system is a mechanism designed to warn a driver when the vehicle begins to move out of its lane (unless a turn signal is on in that direction) on freeways and arterial roads. The first
production lane departure warning system in Europe was the system developed by
Iteris for Mercedes Actros commercial trucks. The system debuted in 2000 and is
now available on most trucks sold in Europe. In 2002, the Iteris system became
available on Freightliner Trucks' trucks in North America. In all of these systems,
the driver is warned of unintentional lane departures by an audible rumble strip
sound generated on the side of the vehicle drifting out of the lane. If a turn signal is
used, no warnings are generated.

7. Mathematical Morphology
Mathematical morphology (MM) is a theory and technique for the analysis and
processing of geometrical structures, based on set theory, lattice theory, topology,
and random functions. MM is most commonly applied to digital images, but it can
be employed as well on graphs, surface meshes, solids, and many other spatial
structures. Topological and geometrical continuous-space concepts such as size,
shape, convexity, connectivity, and geodesic distance, can be characterized by MM
on both continuous and discrete spaces. MM is also the foundation of
morphological image processing, which consists of a set of operators that
transform images according to the above characterizations. MM was originally
developed for binary images, and was later extended to grayscale functions and
images. The subsequent generalization to complete lattices is widely accepted
today as MM's theoretical foundation.
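The basic morphological operators on binary images, erosion and dilation, can be sketched in pure NumPy. The implementation below assumes a symmetric structuring element (for an asymmetric one, dilation would use its reflection), and the square-plus-cross example is invented for the sketch:

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: a pixel becomes 1 if the (symmetric) structuring
    element, centred on it, hits any foreground pixel."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.any(padded[y:y + se.shape[0], x:x + se.shape[1]] & se)
    return out

def erode(img, se):
    """Binary erosion: a pixel stays 1 only if the structuring element,
    centred on it, fits entirely inside the foreground."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + se.shape[0], x:x + se.shape[1]]
            out[y, x] = np.all(window[se == 1])
    return out

# A 3x3 foreground square and a cross-shaped structuring element.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 1
se = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=np.uint8)

eroded = erode(img, se)    # shrinks the square to its single centre pixel
dilated = dilate(img, se)  # grows the square outward by one pixel
print(eroded.sum(), dilated.sum())
```

Compositions of these two operators (opening, closing, and so on) form the operator set of morphological image processing mentioned above.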

Advantages
The minutiae-based method is more accurate than the overlapping method because it is based upon minutiae.
It is an interactive method for recognizing fingerprints.

Disadvantages
It is more time-consuming than the former method.
The program is more complex.

CONCLUSION

Using image processing techniques, we can sharpen images, adjust contrast to make a graphic display more useful, reduce the amount of memory required for storing image information, and so on. Owing to such techniques, image processing is applied in the recognition of images, as in factory-floor quality assurance systems; in image enhancement, as in satellite reconnaissance systems; in image synthesis, as in law-enforcement suspect identification systems; and in image construction, as in plastic surgery design systems.

References

1. R. Gonzalez and R. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, 2002.
2. C. Garcia et al., Face Detection in Color Images Using Wavelet Packet Analysis.
3. D. Vernon, Machine Vision: Automated Visual Inspection and Robot Vision, Prentice Hall, 1991.
