
AUTHORS:

P. Nikitha Reddy
ECE-3/4
GNITS

Swathi Sharma
ECE-3/4
GNITS

ABSTRACT

In the present scenario, bomb blasts are rampant all around the world. Bombs have gone off in buses and underground stations, killing many and leaving many injured, and bomb blasts cannot be predicted beforehand. This paper is about IMAGING FOR CONCEALED WEAPON DETECTION (CWD), a technology for detecting concealed weapons and potential suicide bombers: the sensor improvements, how the imaging takes place, and the challenges involved. We also describe techniques for simultaneous noise suppression and object enhancement of video data and show some results.

INTRODUCTION:

Till now, the detection of concealed weapons has been done by manual screening procedures, used to control explosives in places like airports, sensitive buildings, and famous constructions. But these manual screening procedures do not give satisfactory results, because they screen a person only when the person is near the screening machine, and they sometimes give false alarm indications. We are therefore in need of a technology that detects weapons by scanning. This can be achieved by imaging for concealed weapons.

The detection of weapons concealed underneath a person's clothing is very important to the improvement of the security of the general public as well as the safety of public assets like airports, buildings, and railway stations. Manual screening procedures for detecting concealed weapons such as handguns, knives, and explosives are common in controlled-access settings like airports, entrances to sensitive buildings, and public events. It is sometimes desirable to be able to detect concealed weapons from a standoff distance, especially when it is impossible to arrange the flow of people through a controlled procedure.

The goal is the eventual deployment of automatic detection and recognition of concealed weapons. It is a technological challenge that requires innovative solutions in sensor technologies and image processing. The problem also presents challenges in the legal arena; a number of sensors based on different phenomenology, as well as image processing support, are being developed to observe objects underneath people's clothing.

IMAGING SENSORS

The imaging sensors developed for CWD applications differ in their portability, their proximity to the subject, and whether they use active or passive illumination.

1. INFRARED IMAGER:

Infrared imagers utilize the temperature distribution information of the target to form an image. Normally they are used for a variety of night-vision applications, such as viewing vehicles and people. The underlying theory is that the infrared radiation emitted by the human body is absorbed by clothing and then re-emitted by it. As a result, infrared radiation can be used to show the image of a concealed weapon only when the clothing is tight, thin, and stationary. For normally loose clothing, the emitted infrared radiation is spread over a larger clothing area, decreasing the ability to image a weapon.

2. PASSIVE MMW IMAGING SENSORS:

FIRST GENERATION:

Passive millimeter wave (MMW) sensors measure the apparent temperature through the energy that is emitted or reflected by sources. The output of the sensors is a function of the emissivity of the objects in the MMW spectrum as measured by the receiver. Clothing penetration for concealed weapon detection is made possible by MMW sensors due to the low emissivity and high reflectivity of objects like metallic guns. In early 1995, MMW data were obtained by means of scans using a single detector that took up to 90 minutes to generate one image. Figure 1(a) shows a visual image of a person wearing a heavy sweater that conceals two guns made of metal and ceramics. The corresponding 94-GHz radiometric image, Figure 1(b), was obtained by scanning a single detector across the object plane using a mechanical scanner. The radiometric image clearly shows both firearms.

FIGURE 1: (a) visible and (b) MMW image of a person concealing two guns beneath a heavy sweater.

FIGURE 2: (a) visual image and (b) second-generation MMW image of a person concealing a handgun beneath a jacket.
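The contrast mechanism described above (a low-emissivity, high-reflectivity metal object against a high-emissivity body) can be illustrated with a back-of-the-envelope radiometric calculation. This is only a sketch: the function name and all emissivity and temperature values below are illustrative assumptions, not measured MMW data.

```python
def apparent_temperature(emissivity, object_temp_k, reflected_temp_k):
    """Apparent radiometric temperature seen by a passive receiver:
    the emitted component plus the reflected component. For an opaque
    surface, reflectivity is taken as approximately 1 - emissivity."""
    reflectivity = 1.0 - emissivity
    return emissivity * object_temp_k + reflectivity * reflected_temp_k

# Assumed illustrative values: skin emits well in the MMW band, while
# metal mostly mirrors its surroundings (e.g., a colder background).
skin = apparent_temperature(emissivity=0.9, object_temp_k=310.0,
                            reflected_temp_k=290.0)
metal = apparent_temperature(emissivity=0.1, object_temp_k=300.0,
                             reflected_temp_k=150.0)

print(round(skin, 1))   # 308.0 - close to body temperature
print(round(metal, 1))  # 165.0 - mirrors the colder background
```

Under these assumed values, the metal appears far "colder" than the body behind it, which is the contrast that makes the firearms visible in the radiometric image.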
SECOND GENERATION:

Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. One such camera is the pupil-plane array from Trex Enterprises. It is a 94-GHz radiometric pupil-plane imaging system that employs frequency scanning to achieve vertical resolution and uses an array of 32 individual waveguide antennas for horizontal resolution. This system collects up to 30 frames/s of MMW data. Figure 2 shows the visible and second-generation MMW images of an individual hiding a gun underneath his jacket. It is clear from Figures 1(b) and 2(b) that the image quality of the camera is degraded.

CWD THROUGH IMAGE FUSION:

By fusing passive MMW image data with its corresponding infrared (IR) or electro-optical (EO) image, more complete information can be obtained. The information can then be utilized to facilitate concealed weapon detection.

Fusion of an IR image revealing a concealed weapon and its corresponding MMW image has been shown to facilitate extraction of the concealed weapon. This is illustrated in the example given in Figure 3. Figure 3(a) shows an image taken from a regular CCD camera, and Figure 3(b) shows a corresponding MMW image. If either one of these two images alone is presented to a human operator, it is difficult to recognize the weapon concealed underneath the rightmost person's clothing. If a fused image, as shown in Figure 3(c), is presented, a human operator is able to respond with higher accuracy. This demonstrates the benefit of image fusion for the CWD application, which integrates complementary information from multiple types of sensors.

FIGURE 3: An example of image fusion for CWD. (a) Image 1: visual. (b) Image 2: MMW. (c) Fused image.

IMAGE PROCESSING ARCHITECTURE:

An image processing architecture for CWD is shown in Figure 4. The input can be multi-sensor (i.e., MMW + IR, MMW + EO, or MMW + IR + EO) data or only the MMW data. In the latter case, the blocks showing registration and fusion can be removed from Figure 4. The output can take several forms. It can be as simple as a processed image/video sequence displayed on a screen; a cued display where potential concealed weapon types and locations are highlighted with associated confidence measures; a "yes," "no," or "maybe" indicator; or a combination of the above. The image processing procedures that have been investigated for CWD applications range from simple denoising to automatic pattern recognition.

FIGURE 4: An image processing architecture overview for CWD.

WAVELET APPROACHES FOR PREPROCESSING:

Before an image or video sequence is presented to a human observer for operator-assisted weapon detection or fed into an automatic weapon detection algorithm, it is desirable to preprocess the images or video data to maximize their exploitation. The preprocessing steps considered in this section include enhancement and filtering for the removal of shadows, wrinkles, and other artifacts. When more than one sensor is used, preprocessing must also include registration and fusion procedures.
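As a concrete, greatly simplified illustration of the wavelet denoising idea described in the next section, the sketch below applies a single-level Haar transform to a 1-D signal and soft-thresholds the detail coefficients. This is only a sketch under assumed parameters: the actual technique in this paper uses multiscale 2-D edge representations and motion-compensated temporal filtering, and the signal and threshold here are invented for illustration.

```python
def haar_forward(x):
    """Single-level Haar transform: pairwise averages (approximation)
    and pairwise differences (detail). Length of x must be even."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, det

def haar_inverse(avg, det):
    """Invert the transform: each (a, d) pair gives back (a + d, a - d)."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out

def soft_threshold(values, t):
    """Shrink detail coefficients toward zero: small (noise-like)
    coefficients are zeroed, large (edge-like) ones are kept, shrunk by t."""
    return [max(abs(v) - t, 0.0) * (1 if v >= 0 else -1) for v in values]

def denoise(x, t=0.5):
    avg, det = haar_forward(x)
    return haar_inverse(avg, soft_threshold(det, t))

# A step edge (like an object boundary) corrupted by small alternating noise.
clean = [0.0] * 8 + [4.0] * 8
noisy = [v + (0.3 if i % 2 == 0 else -0.3) for i, v in enumerate(clean)]
restored = denoise(noisy, t=0.5)

err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_restored = sum((a - b) ** 2 for a, b in zip(restored, clean))
print(err_restored < err_noisy)  # True: thresholding suppressed the noise
```

The step edge survives because its energy sits in the approximation coefficients, while the alternating noise lands in the small detail coefficients that the threshold removes; this is the same separation of "object" from "noise" that the multiscale methods below exploit.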

1) IMAGE DENOISING & ENHANCEMENT THROUGH WAVELETS:

Many techniques have been developed to improve the quality of MMW images. In this section, we describe a technique for simultaneous noise suppression and object enhancement of passive MMW video data and show some results.

Denoising of the video sequences can be achieved temporally or spatially. First, temporal denoising is achieved by motion-compensated filtering, which estimates the motion trajectory of each pixel and then conducts a 1-D filtering along the trajectory. This reduces the blurring effect that occurs when temporal filtering is performed without regard to object motion between frames. The motion trajectory of a pixel can be estimated by various algorithms, such as optical flow methods, block-based methods, and Bayesian methods. If the motion in an image sequence is not abrupt, we can restrict the search for the motion trajectory to a small region in the subsequent frames. For additional denoising and object enhancement, the technique employs a wavelet transform method that is based on multiscale edge representation.

FIGURE 5: (a) original frame, (b) denoised frame, (c) denoised and enhanced frame by the wavelet approach.

In Figure 5(a), which shows a frame taken from the sample video sequence, the concealed gun does not show clearly because of noise and low contrast. Figure 5(b) shows the frame denoised by motion-compensated filtering. The frame was then spatially denoised and enhanced by the wavelet transform methods. Four decomposition levels were used, and edges in the fine scales were detected using the magnitudes and angles of the gradient of the multiscale edge representation. The threshold for denoising was 15% of the maximum gradient at each scale. Figure 5(c) shows the final result: the contrast-enhanced and denoised frame. Note that the image of the handgun on the chest of the subject is more apparent in the enhanced frame than it is in the
original frame. However, spurious features such as glint are also enhanced; higher-level procedures such as pattern recognition have to be used to discard these undesirable features.

2) CLUTTER FILTERING:

Clutter filtering is used to remove unwanted details (shadows, wrinkles, imaging artifacts, etc.) that are not needed in the final image for human observation and can adversely affect the performance of the automatic recognition stage. This helps improve the recognition performance, whether operator-assisted or automatic. For this purpose, morphological filters have been employed. Examples of the use of morphological filtering for noise removal are provided through the complete CWD example given in Figure 6. A complete description of the example is given in a later section.

3) REGISTRATION OF MULTI-SENSOR IMAGES:

As indicated earlier, making use of multiple sensors may increase the efficacy of a CWD system. The first step toward image fusion is a precise alignment of the images (i.e., image registration). Very little has been reported on the registration problem for the CWD application. Here, we describe a registration approach for images taken at the same time from different but nearly collocated (adjacent and parallel) sensors, based on the maximization of mutual information (MMI) criterion. MMI states that two images are registered when their mutual information (MI) reaches its maximum value. This can be expressed mathematically as the following:

a* = arg max_a I( F(x~), R(T_a(x~)) )

where F and R are the images to be registered. F is referred to as the floating image, whose pixel coordinates (x~) are to be mapped to new coordinates on the reference image R. The reference image R is to be resampled according to the positions defined by the new coordinates T_a(x~), where T denotes the transformation model, and the dependence of T on its associated parameters a is indicated by the notation T_a. I is the MI similarity measure, calculated over the region of overlap of the two images; it can be computed through the joint histogram of the two images. The above criterion says that the two images F and R are registered through T_a* when a* globally optimizes the MI measure. A two-stage registration algorithm was developed for the registration of IR images and the
corresponding MMW images of the first generation. At the first stage, two human silhouette extraction algorithms were developed, followed by a binary correlation to coarsely register the two images. The purpose was to provide an initial search point close to the final solution for the second stage of the registration algorithm, based on the MMI criterion. In this manner, any local optimizer can be employed to maximize the MI measure. One registration result obtained by this approach is illustrated through the example given in Figure 6.

FIGURE 6: A CWD example.

4) IMAGE DECOMPOSITION:

The most straightforward approach to image fusion is to take the average of the source images, but this can produce undesirable results such as a decrease in contrast. Many of the advanced image fusion methods involve multiresolution image decomposition based on the wavelet transform. First, an image pyramid is constructed for each source image by applying the wavelet transform to the source images. This transform-domain representation emphasizes important details of the source images at different scales, which is useful for choosing the best fusion rules. Then, using a feature selection rule, a fused pyramid is formed for the composite image from the pyramid coefficients of the source images. The simplest feature selection rule is choosing the maximum of the two corresponding transform values; this allows the integration of details into one image from two or more images. Finally, the composite image is obtained by taking an inverse pyramid transform of the composite wavelet representation. The process can be applied to the fusion of multiple source imagery. This type of method has been used to fuse IR and MMW images for the CWD application [7].

The first fusion example for the CWD application is given in Figure 7. Two IR images taken from separate IR cameras at different viewing angles are considered in this case. The advantage of image fusion for this case is clear, since we can observe a complete gun shape only in the fused image. The second fusion example, fusion of IR and MMW images, is provided in Figure 6.

FIGURE 7: (a) and (b) are the original IR images; (c) is the fused image.

AUTOMATIC WEAPON DETECTION:

After preprocessing, the images/video sequences can be displayed for operator-assisted weapon detection or fed into a weapon detection module for automated weapon detection. Toward this aim, several steps are required, including object extraction, shape description, and weapon recognition.

SEGMENTATION FOR OBJECT EXTRACTION:

Object extraction is an important step toward automatic recognition of a weapon, regardless of whether or not the image fusion step is involved. It has been successfully used to extract the gun shape from the fused IR and MMW images; this could not be achieved using the original images alone. The segmented result from the fused IR and MMW image is shown in Figure 6.

CHALLENGES:

There are several challenges ahead. One critical issue is the challenge of performing detection at a distance with a high probability of detection and a low probability of false alarm. Yet another difficulty to be surmounted is forging portable multi-sensor instruments. Also, detection systems go hand in hand with subsequent response by the operator, and system development should take into account the overall context of deployment.

CONCLUSION:

Imaging techniques based on a combination of sensor technologies and processing will potentially play a key role in addressing the concealed weapon detection problem. Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. However, MMW cameras alone cannot provide useful information about the detail and location of the individual being monitored. To enhance the practical value of passive MMW cameras, sensor fusion approaches using MMW and IR, or MMW and EO, cameras are being described. By integrating the complementary information from different sensors, a more effective CWD system is expected.
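The MMI registration criterion described earlier can be made concrete with a small sketch: mutual information is estimated from the joint histogram of the overlapping pixels (as the registration section states), and the transformation search is reduced to a 1-D integer shift. The signal, the search range, and the restriction to pure translation are all assumptions made for illustration; a real CWD registration stage uses a richer transformation model and a local optimizer.

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Estimate I(A;B) from the joint histogram of two equally sized
    integer-valued images (given as flattened lists)."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = 0.0
    for (x, y), c in pab.items():
        # p(x,y) * log( p(x,y) / (p(x) * p(y)) ), with counts c, pa, pb
        mi += (c / n) * math.log(c * n / (pa[x] * pb[y]))
    return mi

def register_shift(floating, reference, max_shift=3):
    """Find the integer shift of the floating image that maximizes MI
    with the reference over their region of overlap (1-D for simplicity)."""
    best_shift, best_mi = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            f, r = floating[s:], reference[:len(reference) - s]
        else:
            f, r = floating[:s], reference[-s:]
        mi = mutual_information(f, r)
        if mi > best_mi:
            best_shift, best_mi = s, mi
    return best_shift

# An assumed 1-D intensity profile and a copy displaced by two samples:
# maximizing MI over the overlap recovers the displacement.
reference = [0, 0, 0, 0, 1, 1, 2, 2, 2, 0, 0, 0]
floating = reference[2:] + [0, 0]   # same scene, shifted left by two samples
print(register_shift(floating, reference))  # -2
```

MI peaks at the correct alignment even though the two inputs need not have the same intensity scale, which is why the criterion suits multi-sensor (IR/MMW) pairs whose pixel values are not directly comparable.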

REFERENCES:

1. Article in "IEEE Signal Processing Magazine," March 2005, pp. 52-61.

2. www.wikipedia.org

3. N. G. Paulter, "Guide to the technologies of concealed weapon imaging and detection," NIJ Guide 602-00, 2001.

4. www.imageprocessing.com
