www.ijrsa.org
International Journal of Remote Sensing Applications Volume 4 Issue 4, December 2014


doi: 10.14355/ijrsa.2014.0404.02

Wide-Angle High Resolution SAR Imaging and Robust Automatic Target Recognition of Civilian Vehicles
Deoksu Lim1, Luzhou Xu2, Yijun Sun3, Jian Li*4
1,2,*4 Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
3 Department of Microbiology and Immunology, The State University of New York, Buffalo, NY 14214, USA
1lemduck@ufl.edu; 2xuluzhou@ufl.edu; 3yijunsun@buffalo.edu; *4li@dsp.ufl.edu

Abstract
This paper focuses on wide-angle synthetic aperture radar
(SAR) imaging and automatic target recognition of civilian
vehicles. A recently proposed hybrid data adaptive method
is applied to generate accurate and sparse SAR images of
civilian vehicles. We combine projection slice theorem (PST)
with 2-D FFT to obtain a more accurate pose estimation than
the established PST. Given the so-obtained pose estimates,
the horizontal and vertical cumulative-sum-vector (CSV)
profiles are utilized to focus the SAR image only on the
vehicle of current interest. The corresponding vertical CSV is
used as a simple feature for automatic target recognition
(ATR). We adopt the local learning based feature selection
for ATR. The effectiveness of the entire chain of imaging, pose estimation, feature extraction, and ATR methods is verified using experimental results based on the publicly available GOTCHA SAR data set. We demonstrate that high resolution SAR imaging results in much improved ATR performance compared to conventional SAR imaging.
Keywords
SAR; IAA; SLIM; Pose Estimation; PST; ATR; Local Learning
Based Feature Selection

Introduction
Synthetic aperture radar (SAR) systems have been
widely used in military and civilian applications
(Jakowatz, 1996). The employment of wide-angle SAR
imaging can improve the automatic target recognition
(ATR) performance (Dungan et al., 2012). This paper
focuses on wide-angle SAR imaging, pose estimation,
feature extraction, and automatic recognition of
civilian vehicles based on the publicly available
GOTCHA data set (Dungan et al., 2012). This data set contains 31 circular orbits, labeled from 214 to 244, over a scene with a diameter of 5 km. The carrier frequency and the bandwidth of the radar are 9.7 GHz and 600 MHz, respectively. The data set


contains spotlight extracted phase history data of 33


civilian vehicles and 22 reflectors (Dungan et al., 2012).
In order to generate well-focused SAR images, the
prominent-point auto-focusing method (Carrara et al.,
1995) is used to correct motion-induced errors in the
phase-history data. Traditional radar systems assume
point scatterers, which might be appropriate for
narrow angular apertures. However, the scatterer
responses become angle-dependent in the wide-angle
case. Therefore, we divide the wide-angle aperture
into many narrow angular apertures or sub-apertures
and then apply a hybrid high resolution SAR imaging
method to obtain the imaging result for each subaperture. This hybrid method involves applying the
iterative adaptive approach (IAA) (Yardibi et al., 2010;
Glentis and Jakobsson, 2011) first to find accurate and
unbiased estimates and then the sparse learning via
iterative minimization (SLIM) method (Tan et al., 2011)
to obtain sparse imaging results by suppressing
sidelobes. We show, via comparison with the standard back-projection (BP) method (Gorham and Moore, 2010), that such a hybrid method can provide enhanced image resolution as well as reduced sidelobe levels, while maintaining high accuracy. The final wide-angle
image is obtained by non-coherently combining all the
sub-aperture images. The high resolution SAR images
look good visually, but how will they enhance the
ultimate goal of automatic target recognition (ATR)?
We will address this question herein.
First of all, the inconsistency of the target pose in the
wide-angle SAR image can result in ATR performance
degradation. It is therefore necessary to eliminate this
effect so that the targets in each image have the same
pose before target recognition. The shape of the target
in each wide-angle SAR image is approximately
rectangular with two dominant parallel long edges
(straight lines). The pose of the target can be obtained


by estimating the angles of these two parallel lines.


This is achieved by finding an angle associated with
the maximum peak obtained from projection slice
theorem (PST) (Munson et al., 1983) within an angle
range. The range center is the initial pose angle
acquired by the 2-D FFT based pose estimation
method (Gianelli and Xu, 2013).
After pose estimation, we reduce the size of the target
chip to focus only on the target of interest via utilizing
horizontal and vertical (row and column, respectively)
cumulative sum vector (CSV) profiles, which are
obtained from the wide-angle SAR image chips. The
vertical CSV of each target chip is extracted to be used
as a simple feature for target recognition.
We treat the target recognition problem as a multiple-class classification problem and address it using the local learning based feature selection technique (namely, the LOGO algorithm) (Sun et al., 2010), which can be shown to provide very good target recognition performance. Indeed, the resulting ATR performance is better than that based on peak scatterer locations (Gianelli and Xu, 2013) from each sub-aperture image.
The rest of this paper is organized as follows. Section
II elaborates the SAR imaging, pose estimation, feature
extraction, and the target recognition methods
successively. Section III provides the experimental
results to demonstrate the performance of the
proposed methods and the advantage of high
resolution SAR imaging. The paper is concluded in
Section IV.
Notation: Vectors and matrices are denoted using bold lowercase and uppercase typefaces, respectively. The transpose and conjugate transpose of a matrix or vector are denoted as (·)^T and (·)^H, respectively. The Euclidean norm of a vector is denoted by ‖·‖. I denotes the identity matrix of appropriate dimension. Other mathematical symbols are defined after their first appearance.
Algorithms
In this section, we describe the hybrid adaptive
method for high resolution SAR imaging and
introduce the hybrid angle estimation method (2-D FFT followed by PST) to determine the pose of the target in the so-obtained SAR image. Moreover, we
extract horizontal and vertical CSV of each SAR image
after pose estimation so as to reduce the target chip
size to focus only on the target of interest. The vertical


CSV is also used as a simple feature for automatic


target recognition. Finally, the local learning based feature selection technique (Sun et al., 2010) is presented for civilian vehicle recognition.
SAR Imaging
To obtain focused SAR images, it is necessary to
compensate for the slant-range errors encountered in
wide-angle SAR. The quad-trihedral target, as shown
in Fig. 1, can be considered as a point target across the
entire aperture.

FIG. 1 QUAD-TRIHEDRALS (DUNGAN ET AL., 2012)

We segment the entire aperture into overlapped 4-degree sub-apertures. The fundamental BP algorithm is applied to each sub-aperture to generate the sub-image of the quad-trihedral, as shown in Fig. 2(a).


FIG. 2 SENSOR FOCUSING (BP): (A) QUAD-TRIHEDRAL IMAGE IN A SUB-APERTURE, (B) ESTIMATED RANGE ERROR, (C) FUSED QUAD-TRIHEDRAL IMAGE WITHOUT FOCUSING, (D) FUSED QUAD-TRIHEDRAL IMAGE WITH FOCUSING

We assume that the object exists at the exact location specified in the provided data, and compute the object's deviation from this point across the aperture. We define (Δx, Δy) as the offset of the quad-trihedral location within the BP sub-aperture image from the expected location. The slant-range error Δr for each sub-aperture is given by (see Fig. 2(a))


Δr = Δx·cos(θ̄_el)·cos(θ̄_az) + Δy·cos(θ̄_el)·sin(θ̄_az),   (1)

where θ̄_el and θ̄_az are the averaged elevation and azimuth angles for the sub-aperture, respectively.

After the range error value is obtained in each sub-aperture, a robust polynomial regression is conducted to get a parameterized representation of the pulse-to-pulse range error, as shown in Fig. 2(b), which is then applied to the raw phase history data to remove this pulse-to-pulse range error. The focusing method compensates for the range error and yields well-focused imagery (see Fig. 2(c) and (d)).
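The correction step above can be sketched numerically. This is a minimal illustration rather than the paper's implementation: the geometry, the band, and the constant 5 cm range error are hypothetical, and an ordinary least-squares polynomial fit stands in for the robust regression.

```python
import numpy as np

def fit_range_error(pulse_idx, range_err, deg=3):
    """Parameterize per-pulse slant-range error (metres) with a polynomial.
    (Stand-in for the robust regression used in the paper.)"""
    return np.polyval(np.polyfit(pulse_idx, range_err, deg), pulse_idx)

def correct_phase_history(Y, freqs, range_err, c=3e8):
    """Remove the phase ramp exp(-j*4*pi*f_k*dr_n/c) induced by the range
    error from the (N_pulses, K_freqs) phase-history matrix Y."""
    return Y * np.exp(1j * 4 * np.pi * np.outer(range_err, freqs) / c)

# toy check: a known, constant range offset is removed exactly
freqs = np.linspace(9.4e9, 10.0e9, 64)     # ~600 MHz of bandwidth at X-band
dr = 0.05 * np.ones(32)                    # hypothetical 5 cm error on 32 pulses
Y_ideal = np.ones((32, 64), dtype=complex)
Y_meas = Y_ideal * np.exp(-1j * 4 * np.pi * np.outer(dr, freqs) / 3e8)
Y_fixed = correct_phase_history(Y_meas, freqs, dr)
assert np.allclose(Y_fixed, Y_ideal)
```

In the real pipeline the per-pulse error would come from the quad-trihedral offsets via Eq. (1), and the smoothed polynomial, rather than the raw estimates, would be fed to the phase correction.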

After the sub-apertures are focused, we segment the entire aperture into overlapped 2-degree sub-apertures for imaging. The radar return y_n(f_k) from a scatterer at location r = (x, y, z), for the nth pulse and instantaneous frequency f_k, can be written as:

y_n(f_k) = α·exp(−j4πf_k‖s_n − r‖/c) + e_n(f_k),  n = 1, …, N,  k = 1, …, K,   (2)

where s_n = (x_n, y_n, z_n) and e_n(f_k) denote the radar platform location and the additive noise for the nth pulse, respectively. Additionally, α, c, N, and K are the corresponding reflection coefficient, the speed of light, the number of pulses, and the number of frequency grid points, respectively.
Also, let the predefined imaging area be divided into P × Q potential scattering points r_{p,q}, for p = 1, …, P and q = 1, …, Q. Hence there are PQ pixels in each sub-aperture image. The received signal y_n(f_k) acquired for the nth pulse and instantaneous frequency f_k can then be written as

y_n(f_k) = Σ_{p=1}^{P} Σ_{q=1}^{Q} α_{p,q}·exp(−j4πf_k‖s_n − r_{p,q}‖/c) + e_n(f_k).   (3)
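To make the model concrete, the sketch below builds steering vectors a_{p,q} with elements exp(−j4πf_k‖s_n − r_{p,q}‖/c), stacks them into a dictionary, and simulates a one-scatterer scene in the compact form y = Aα + e. The geometry and sizes are invented for illustration, and a matched filter stands in for the adaptive estimators described next.

```python
import numpy as np

c = 3e8
rng = np.random.default_rng(0)

N, K = 8, 16                                     # pulses, frequency grid points
angles = np.linspace(0.0, 0.05, N)               # short circular arc (radians)
plat = np.stack([1000*np.cos(angles),            # platform positions s_n
                 1000*np.sin(angles),
                 300*np.ones(N)], axis=1)
freqs = np.linspace(9.4e9, 10.0e9, K)
xs = np.linspace(-2, 2, 5)
grid = np.array([(x, y, 0.0) for x in xs for y in xs])   # pixels r_{p,q}, P*Q = 25

def steering_vector(r):
    """a_{p,q}: one phase entry per (pulse, frequency) pair, stacked n-major."""
    d = np.linalg.norm(plat - r, axis=1)                  # ||s_n - r||
    return np.exp(-1j*4*np.pi*np.outer(d, freqs)/c).ravel()

A = np.stack([steering_vector(r) for r in grid], axis=1)  # (N*K, P*Q) dictionary
alpha = np.zeros(len(grid), dtype=complex)
alpha[12] = 1.0                                           # one scatterer, centre pixel
e = 0.01*(rng.standard_normal(N*K) + 1j*rng.standard_normal(N*K))
y = A @ alpha + e                                         # compact model y = A*alpha + e

# a matched filter already localises the scatterer (adaptive methods sharpen it)
mf = np.abs(A.conj().T @ y) / (N*K)
assert mf.argmax() == 12
```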

Data-independent approaches for SAR imaging suffer from high sidelobe levels and poor resolution. To overcome these limitations, a hybrid method utilizing the IAA and SLIM algorithms is used for SAR imaging. The hybrid algorithm starts with the IAA method, where nine iterations of segmented IAA (SIAA) are applied to obtain preliminary results, followed by one fast IAA (FIAA) iteration to provide accurate and unbiased SAR images of the target chips. Then, at the conclusion of the IAA implementation, two iterations of SLIM with q₀ → 0 are applied to induce sparsity in the target chip SAR images. This hybrid method is referred to as H-SIAA-FIAA-SLIM0 (Glentis et al., 2013).
Specifically, to obtain accurate and sparse estimates of the pixel values, we apply the hybrid SIAA-FIAA-SLIM0 method as follows. First, the expression in (3) is represented in the compact form:

y = Aα + e,   (4)

which can be viewed as a sparse representation problem, where α = [α_{1,1} … α_{1,Q} … α_{P,Q}]^T is a sparse vector of target pixels to be estimated, given the measurement vector y = [y_1(f_1) … y_1(f_K) y_2(f_1) … y_N(f_K)]^T, the known dictionary A = [a_{1,1} … a_{1,Q} … a_{P,Q}], and the unknown noise vector e = [e_1(f_1) … e_1(f_K) e_2(f_1) … e_N(f_K)]^T. (Note that the term exp(−j4πf_k‖s_n − r_{p,q}‖/c) in (3) corresponds to the ((n − 1)K + k)th element of the vector a_{p,q}, for n = 1, …, N and k = 1, …, K.)

The method initially estimates the pixel values of the sub-aperture image using the segmented IAA (SIAA) and the fast IAA (FIAA) by minimizing the following weighted least squares problem (Yardibi et al., 2010; Glentis and Jakobsson, 2011):

min_{α_{p,q}} (y − α_{p,q}·a_{p,q})^H R_{p,q}^{−1} (y − α_{p,q}·a_{p,q}),   (5)

where

R_{p,q} = R − |α_{p,q}|²·a_{p,q}·a_{p,q}^H   (6)

and

R = Σ_{p=1}^{P} Σ_{q=1}^{Q} |α_{p,q}|²·a_{p,q}·a_{p,q}^H.   (7)

Minimizing the cost function in (5) gives the following solution:

α_{p,q} = (a_{p,q}^H R_{p,q}^{−1} y) / (a_{p,q}^H R_{p,q}^{−1} a_{p,q}),   (8)

which can be rewritten as

α_{p,q} = (a_{p,q}^H R^{−1} y) / (a_{p,q}^H R^{−1} a_{p,q})   (9)

by using the Woodbury matrix inversion lemma (Woodbury, 1950) and (6). Instead of computing R_{p,q}^{−1} for each pixel as in Eq. (8), Eq. (9) only computes R^{−1} once and hence saves computations significantly. Note that the estimate of α_{p,q} requires the knowledge of R and vice versa.

SIAA in the first step uses possibly overlapping segments of the data of size K₁ × K₂, with K₁ < K and K₂ < N. The 2-D SIAA is then formed by iterating

α_{p,q}^{(l)} = (ā_{p,q}^H R̄^{−1} y_l) / (ā_{p,q}^H R̄^{−1} ā_{p,q}),  l = 1, …, L,   (10)

and

R̄ = Σ_{p=1}^{P} Σ_{q=1}^{Q} P̄_{p,q}·ā_{p,q}·ā_{p,q}^H   (11)

until convergence, where ā_{p,q} denotes the segment-sized steering vector,

P̄_{p,q} = (1/L)·Σ_{l=1}^{L} |α_{p,q}^{(l)}|²,   (12)

and the estimates of α_{p,q}^{(l)} are initialized using the matched filter. In (10), y_l = vec{Y_l}, where Y_l is the lth segment of mat{y} of size K₁ × K₂, vec{·} denotes column-wise vectorization, and mat{·} is the inverse operation, recreating the matrix from the vectorized matrix. Following the initialization of α_{p,q}^{(l)} using the matched filter, obtained by setting R̄ = I in (10), we apply the SIAA implementation to save computational cost.

Next, the more accurate FIAA estimator is employed to refine the estimation results. The initial covariance matrix for FIAA is calculated using the P̄_{p,q} obtained at the conclusion of the SIAA iterations. The 2-D FIAA is then formed by iterating:

R = Σ_{p=1}^{P} Σ_{q=1}^{Q} |α_{p,q}|²·a_{p,q}·a_{p,q}^H   (13)

and

α_{p,q} = (a_{p,q}^H R^{−1} y) / (a_{p,q}^H R^{−1} a_{p,q}).   (14)

The R̄^{−1} in (10) and the R^{−1} in (14) are computed using the Gohberg-Semencul (GS) factorization (Glentis et al., 2013; Xue et al., 2011). The hybrid method then achieves sparsity by applying two iterations of the sparse learning via iterative minimization (SLIM) method (Tan et al., 2011), which considers the following hierarchical Bayesian model (with a sparsity-promoting prior):

y | α, η ~ N(Aα, η·I),

f(α) ∝ Π_{p,q} exp(−(2/q₀)·(|α_{p,q}|^{q₀} − 1)),

and

f(η) ∝ 1/η.

SLIM estimates {α, η} by minimizing the negative logarithm of the posterior density function, given by:

g(α, η) = NK·log(η) + ‖y − Aα‖²/η + Σ_{p=1}^{P} Σ_{q=1}^{Q} (2/q₀)·(|α_{p,q}|^{q₀} − 1),  for 0 < q₀ ≤ 1.   (15)

Note that q₀ → 0 promotes the highest sparsity. Minimizing the cost function in (15) gives the following estimates for the pixel values and the noise variance, respectively:

α = P·A^H·(A·P·A^H + η·I)^{−1}·y   (16)

and

η = ‖y − Aα‖² / (NK),   (17)

where P = diag{p} with p_{p,q} = |α_{p,q}|^{2−q₀}. The matrix inversion in (16) is computed using the conjugate gradient (CG) algorithm (Glentis et al., 2013; Vu et al., 2012). An initial estimate of p for SLIM is calculated as p_{p,q} = |α_{p,q}|², with α_{p,q} obtained as in (14) via FIAA, and η is initialized as

η_ini = ‖y − Aα‖² / (NK).   (18)

Then SLIM is applied by iterating (16) and (17) twice (see Table 1 for the main steps of the H-SIAA-FIAA-SLIM0 algorithm). The final wide-angle image is obtained by combining all the sub-aperture images using a non-coherent max-magnitude operator.
TABLE 1 MAIN STEPS OF THE H-SIAA-FIAA-SLIM0 ALGORITHM

SIAA
Initialize α_{p,q}^{(l)} using the matched filter with R̄ = I in (10).
Repeat the following two steps for 9 iterations:
  Step 1: Obtain R̄ from (11) and (12).
  Step 2: Estimate α_{p,q}^{(l)} using (10) for p = 1, …, P and q = 1, …, Q at l = 1, …, L.
FIAA
Apply once:
  Step 1: Calculate R using (13) with the α_{p,q} obtained at the conclusion of the SIAA iterations.
  Step 2: Estimate α_{p,q} using (14).
SLIM
Step 1: Calculate p_{p,q} = |α_{p,q}|² with α_{p,q} obtained from FIAA and initialize η as in (18).
Repeat Step 2 and Step 3 for two iterations:
  Step 2: Update α_{p,q} using (16) for p = 1, …, P and q = 1, …, Q.
  Step 3: Update p_{p,q} = |α_{p,q}|² and η using (17).
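The character of the hybrid estimator can be seen in a 1-D toy problem. The sketch below runs plain IAA followed by two SLIM updates with q₀ → 0; it deliberately omits the segmentation (SIAA), the fast GS-factorized solvers (FIAA), and the CG inversion, and the Fourier dictionary and problem sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
M, G = 32, 64                       # samples, grid points
grid = np.arange(G)
A = np.exp(2j*np.pi*np.outer(np.arange(M), grid)/G) / np.sqrt(M)   # unit-norm columns
alpha_true = np.zeros(G, dtype=complex)
alpha_true[[10, 40]] = [1.0, 0.7]
y = A @ alpha_true + 0.01*(rng.standard_normal(M) + 1j*rng.standard_normal(M))

# --- IAA: alternate the covariance R = A diag(|alpha|^2) A^H and Eq. (9) ---
alpha = A.conj().T @ y              # matched-filter initialisation (R = I)
for _ in range(10):
    R = (A * np.abs(alpha)**2) @ A.conj().T
    Ri = np.linalg.inv(R)
    num = A.conj().T @ (Ri @ y)
    den = np.einsum('mg,mn,ng->g', A.conj(), Ri, A)   # a_g^H R^-1 a_g for every g
    alpha = num / den

# --- SLIM (q0 -> 0): sparsifying refinement via Eqs. (16)-(17), p = |alpha|^2 ---
eta = np.linalg.norm(y - A @ alpha)**2 / M
for _ in range(2):
    p = np.abs(alpha)**2
    alpha = p * (A.conj().T @ np.linalg.solve((A * p) @ A.conj().T + eta*np.eye(M), y))
    eta = np.linalg.norm(y - A @ alpha)**2 / M

# the two true scatterers dominate the final sparse estimate
top2 = set(np.argsort(np.abs(alpha))[-2:])
assert top2 == {10, 40}
```

Despite the simplifications, the two-stage behaviour matches the description above: IAA yields accurate, low-sidelobe amplitude estimates, and SLIM then suppresses the residual floor.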

From Eq. (3), the conventional BP method is given by

α_{p,q} = (1/(NK))·Σ_{n=1}^{N} Σ_{k=1}^{K} y_n(f_k)·exp(j4πf_k‖s_n − r_{p,q}‖/c).   (19)

The inner summation is implemented by using the FFT and a linear interpolation technique (Gorham and Moore, 2010).
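For reference, Eq. (19) can also be evaluated naively as a double coherent sum. The sketch below does so for a hypothetical one-scatterer scene; the actual processing would use the FFT-plus-interpolation fast path of Gorham and Moore (2010).

```python
import numpy as np

c = 3e8
N, K = 16, 32
angles = np.linspace(0.0, 0.03, N)
plat = np.stack([2000*np.cos(angles),            # platform positions s_n
                 2000*np.sin(angles),
                 500*np.ones(N)], axis=1)
freqs = np.linspace(9.4e9, 10.0e9, K)

def ranges(r):
    return np.linalg.norm(plat - np.asarray(r), axis=1)     # ||s_n - r||

# simulate one unit scatterer at the scene origin
r_true = (0.0, 0.0, 0.0)
Y = np.exp(-1j*4*np.pi*np.outer(ranges(r_true), freqs)/c)   # (N, K) phase history

def backproject(Y, r):
    """Naive evaluation of Eq. (19): coherent sum over pulses and frequencies."""
    ph = np.exp(1j*4*np.pi*np.outer(ranges(r), freqs)/c)
    return np.sum(Y * ph) / Y.size

# the BP image peaks at the true location and decays away from it
assert abs(backproject(Y, r_true)) > 0.99
assert abs(backproject(Y, (5.0, 5.0, 0.0))) < 0.5
```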
Pose Estimation
Target pose estimation and registration are critical preprocessing steps for automatic target recognition (ATR)
of civilian vehicles as the ATR performance can
degrade owing to the random pose of the targets after
image formation.


FIG. 3 IMAGING PRINCIPLE OF A VEHICLE (SDMS, 2014)



FIG. 4 APPROXIMATE STRAIGHT LINES (MARKED IN RED)

For pose estimation, we use the imaging principle of a vehicle highlighted in Fig. 3. As shown in Fig. 3, the direct returns, primarily from the roofline of the vehicle, appear as the outer circle in the imaging result shown in Fig. 4 due to the layover effect. The double-bounce reflections from the vehicle body resemble dihedral returns, and they form the inner circle with two approximately straight parallel lines, as shown in Fig. 4. We use the angle of the lines to determine the pose of the vehicle.
The two parallel lines contain the majority of the energy in the SAR image of the vehicle. Exploiting this fact, we can use the projection slice theorem (PST) (Munson et al., 1983), which finds line-integrated amplitudes in the direction of the vertical axis rotated by an angle. Consider a generic problem with image pixels f(x, y) (for the SAR image of a target chip, f(x, y) is the modulus of the image), as shown in Fig. 5(a). The PST yields a 2-D accumulated-amplitude matrix, with the row dimension denoting the angle θ by which the horizontal axis is rotated and the column dimension denoting the coordinate u along the rotated axis. The angle with the maximum peak in this 2-D matrix is used to decide the pose of the vehicle. There are two ways to obtain a projection slice: a brute-force method and a slice in the Fourier domain. The brute-force method performs a direct projection onto the rotated horizontal axis, as shown in Fig. 5(a).

FIG. 5 PST: (A) LINE OF INTEGRATION (BRUTE-FORCE METHOD), (B) KNOWN SAMPLES OF F(ω₁, ω₂) (SLICE METHOD)

The projection of f(x, y) at angle θ is given by

p_θ(u) = ∫ f(u·cos(θ) − v·sin(θ), u·sin(θ) + v·cos(θ)) dv,   (20)

where p_θ(u) calculated at θ = 0 is a line integral in the direction of the vertical axis. Its complexity is O(N³) for an image of size N × N, whereas the slice method in the Fourier domain has O(N² log N) complexity (Munson et al., 1983). The 1-D Fourier transform of p_θ(u) is given by

P_θ(ω) = ∫ p_θ(u)·e^{−jωu} du.   (21)

Using Eq. (21) and based on the PST, we get

P_θ(ω) = F(ω·cos(θ), ω·sin(θ)),   (22)

where F(ω₁, ω₂) = ∫∫ f(x, y)·e^{−j(ω₁x + ω₂y)} dx dy. Eq. (22) suggests that the 1-D Fourier transform of the projection at an angle θ is a slice of the 2-D transform F(ω₁, ω₂) taken at the same angle with reference to the ω₁ axis, as shown in Fig. 5(b). A value at a point not on the uniform Cartesian grid is obtained via bilinear interpolation (BI) (Wikipedia, 2014). The computational complexity of the PST is higher than that of the 2-D FFT based pose estimation method (Gianelli and Xu, 2013). We consider below the hybrid of the 2-D FFT and the PST.
The hybrid pose estimation method first computes the 2-D Fourier transform of the modulus of the SAR image (after subtracting its mean value). The size of each dimension of the 2-D Fourier transform is the nearest power of 2 greater than the corresponding image dimension. A value at a point not on the uniform Cartesian grid is acquired via nearest-neighbor interpolation (NNI) in the 2-D FFT based angle estimation.
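The coarse-then-fine pose search can be sketched on a synthetic chip containing two parallel bright lines. The FFT radial-slice search and the brute-force projection below are simplified stand-ins for the method described here: there is no power-of-2 zero padding, plain nearest-pixel indexing replaces NNI/BI, and the 5-degree coarse grid, the Δ = 10° refinement window, and the image sizes are all hypothetical.

```python
import numpy as np

def line_image(theta_deg, n=128):
    """Synthetic chip: two parallel bright lines at angle theta (degrees
    from the horizontal axis), mimicking the dihedral returns."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    t = np.deg2rad(theta_deg)
    d = -x*np.sin(t) + y*np.cos(t)            # signed distance to a line at angle t
    return ((np.abs(d - 0.2) < 0.02) | (np.abs(d + 0.2) < 0.02)).astype(float)

def projection_peak(img, phi_deg):
    """Brute-force PST score: peak of the projection onto the axis at phi."""
    n = img.shape[0]
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    t = np.deg2rad(phi_deg)
    u = x*np.cos(t) + y*np.sin(t)
    hist, _ = np.histogram(u, bins=n, range=(-1.5, 1.5), weights=img)
    return hist.max()

def estimate_pose(img, delta=10.0, fine_step=0.5):
    n = img.shape[0]
    # coarse step: radial slices of the 2-D FFT magnitude; the parallel lines
    # produce a ridge perpendicular to their direction in the Fourier domain
    F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    rho = np.arange(2, n//2 - 1)
    def slice_energy(phi):
        t = np.deg2rad(phi)
        return F[(n//2 + rho*np.sin(t)).astype(int),
                 (n//2 + rho*np.cos(t)).astype(int)].sum()
    coarse = np.arange(-90.0, 90.0, 5.0)
    phi0 = coarse[int(np.argmax([slice_energy(p) for p in coarse]))]
    # fine step: brute-force PST search within [phi0 - delta, phi0 + delta]
    cands = np.arange(phi0 - delta, phi0 + delta + fine_step, fine_step)
    phi = cands[int(np.argmax([projection_peak(img, p) for p in cands]))]
    return phi % 180.0 - 90.0          # line angle is phi +/- 90 deg (mod 180)

assert abs(estimate_pose(line_image(30.0)) - 30.0) < 1.0
```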

FIG. 6 2-D FOURIER TRANSFORM OF THE MODULUS OF THE SAR IMAGE IN FIG. 4

The initial angle estimate θ_ini, which is the slope of the line containing the majority of the energy in the Fourier domain, as shown in Fig. 6, is given by

θ_ini = arg max_{−90° ≤ θ < 90°} Σ_ρ |F(ρ·cos(θ), ρ·sin(θ))|.   (23)

θ_ini is used to create an angle range of interest in p_θ(u), as shown in Fig. 7(a).

FIG. 7 HYBRID POSE ESTIMATION AFTER THE 2-D FOURIER TRANSFORM (Δ = 5°)

The PST is applied to the range [θ_ini − Δ, θ_ini + Δ] (see the orange rectangle in Fig. 7(a)), where Δ denotes the interval of interest around θ_ini. We find the final angle from the maximum peak (see the pixel indicated by the red arrow in Fig. 7(c)) of p_θ(u), obtained from the 1-D inverse Fourier transform of P_θ(ω), within [θ_ini − Δ, θ_ini + Δ].

The final angle is perpendicular to the pose angle of the vehicle, which is defined as the angle between the straight parallel lines in Fig. 4 and the horizontal axis. Its range is within [−90°, 90°). This hybrid pose estimation method yields proper pose correction, as shown in Fig. 8. The effect of Δ is discussed further in Section III.

FIG. 9 PICKING OUT A TIGHT TARGET CHIP

The boundaries (marked as green dots in Fig. 9) of the target can be determined based on the two CSVs, and a smaller target chip can be extracted from the original one. As shown in Fig. 9, the extracted target chip has a 180° ambiguity, which means that the front of the civilian vehicle cannot be distinguished from the rear. We select a simple target feature that is insensitive to this ambiguity for target recognition. This feature captures important information about the target, as shown in the following equations:

h_p = Σ_{q=1}^{Q} |α_{p,q}|,  for p = 1, …, P,

v_q = Σ_{p=1}^{P} |α_{p,q}|,  for q = 1, …, Q,   (24)

with each CSV normalized by its maximum value, where α_{p,q} denotes the reflection coefficient after pose correction.


FIG. 8 ORIGINAL TARGET CHIP AFTER POSE CORRECTION

Feature Extraction and Alignment


Once the pose of the target has been estimated, a rotation of the image is performed so that each target is of the same pose. Moreover, we can reduce the size of the original target chip to focus only on the target of interest (see Fig. 9) by generating the horizontal and vertical cumulative-sum vectors (CSVs) of the image, i.e., by summing the rows and columns of the modulus image into vectors.

We use this vertical CSV feature for target feature registration. As shown in Fig. 10, we determine the two peaks of the vertical CSV and select the center of the peaks as a reference point. In order to facilitate the subsequent target recognition, the centers of the two peaks of all images are then aligned, and zeros are affixed to the beginning and the end of the vectors to make their lengths equal.
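A minimal sketch of the CSV-based cropping and two-peak alignment follows, under hypothetical sizes and a hypothetical support threshold (the paper does not specify how the chip boundaries are thresholded):

```python
import numpy as np

def csv_profiles(img):
    """Row and column cumulative-sum vectors of a modulus image,
    each normalised by its maximum (cf. Eq. (24))."""
    h = img.sum(axis=1)
    v = img.sum(axis=0)
    return h / h.max(), v / v.max()

def crop_chip(img, thresh=0.1):
    """Shrink the chip to the target support; the 0.1 threshold is hypothetical."""
    h, v = csv_profiles(img)
    rows = np.where(h > thresh)[0]
    cols = np.where(v > thresh)[0]
    return img[rows[0]:rows[-1]+1, cols[0]:cols[-1]+1]

def align_vcsv(vcsvs):
    """Centre each vertical CSV on the midpoint of its two highest peaks and
    zero-pad so that all vectors share a common length."""
    centred = [(v, int(np.argsort(v)[-2:].mean())) for v in vcsvs]
    half = max(max(m, len(v) - m) for v, m in centred)
    out = np.zeros((len(centred), 2*half))
    for i, (v, m) in enumerate(centred):
        out[i, half-m : half-m+len(v)] = v
    return out

# toy modulus image: a bright block occupying rows 10..19, cols 30..49
img = np.zeros((64, 64))
img[10:20, 30:50] = 1.0
chip = crop_chip(img)
assert chip.shape == (10, 20)

# two toy VCSVs with peak pairs at different positions end up co-centred
aligned = align_vcsv([np.array([0, 1, 0, 1, 0, 0.0]),
                      np.array([0, 0, 0, 1, 0, 1, 0.0])])
assert aligned.shape == (2, 8)
```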


FIG. 10 FEATURE REGISTRATION: (A) FIND THE TWO HIGHEST PEAKS, (B) FIND THE CENTER BETWEEN THE TWO HIGHEST PEAKS, (C) OVERLAP THE CENTERS

Automatic Target Recognition


For automatic target recognition of civilian vehicles,
we consider three types of targets based on the data,
namely, SEDAN, SUV and VAN. We formulate it as a
multiple-class classification problem. Since each target
is represented as a high-dimensional vector where
only a small fraction of features contain relevant
information for classification, a commonly used
practice is to first perform a feature selection and then
apply a classification algorithm to selected features to
construct a prediction model. A major drawback of the
above strategy is that it is computationally tedious to
estimate parameters separately for the feature
selection and the classification algorithm. Moreover,
features selected in the first step may not be optimal
for the classification algorithm used in the second step.
Therefore, a more plausible approach is to perform
feature selection and the classification simultaneously.
To this end, we use our recently developed local learning based feature selection technique (Sun et al., 2010). The algorithm was initially proposed for feature selection, but we demonstrate here that it can be easily extended for classification. We give below a brief description of the algorithm. Since this subsection is independent of the previous subsections, its notation is likewise self-contained for simplicity.
Let D = {(x_n, y_n)}_{n=1}^{N}, with x_n ∈ R^J and y_n ∈ {1, 2, 3}, be a training data set with three types of targets, where N and J are the number of training points and the dimension of x_n, respectively. Given a sample x_n, we start by defining a margin. We first find two nearest neighbors of x_n, one from the same class (termed the nearest hit, or NH) and the other from a different class (termed the nearest miss, or NM). The margin of x_n is then defined as:

ρ_n = ‖x_n − NM(x_n)‖₁ − ‖x_n − NH(x_n)‖₁,   (25)

where ‖·‖₁ is the ℓ₁ norm, or the Manhattan distance.


An intuitive interpretation of the above margin is as a measure of how much x_n can be corrupted by noise before being misclassified (Crammer et al., 2002). By large-margin theory (Vapnik, 2000; Schapire et al., 1998), a learning algorithm that minimizes a margin-based error function usually generalizes well on unseen test data. A natural idea, then, is to scale each feature, and thus obtain a weighted feature space parameterized by a nonnegative vector w, so that a margin-based error function in the induced feature space is minimized. The magnitude of each element of w measures the relevance of the corresponding feature. The margin of x_n, computed with respect to w, is given by

ρ_n(w) = ‖x_n − NM(x_n)‖_{w,1} − ‖x_n − NH(x_n)‖_{w,1} = w^T z_n,   (26)

where z_n = |x_n − NM(x_n)| − |x_n − NH(x_n)|, ‖x‖_{w,1} = Σ_j w_j|x_j| is the weighted ℓ₁ norm, and |·|
denotes the element-wise absolute operator. Note that ρ_n(w) is a linear function of w, and the margin thus defined requires only information about the neighborhood of x_n, while no assumption is made about the underlying data distribution. This means that by local learning we can transform an arbitrary nonlinear problem into a set of locally linear ones (Sun et al., 2010). This local linearization enables us to estimate the feature weights by using a linear model that has been extensively studied in the literature. The main problem with the above margin definition, however, is that the nearest neighbors of a given sample are unknown before learning. In the presence of a large number of irrelevant features, the nearest neighbors identified in the original space can be different from those in the weighted feature space. To account for the uncertainty in defining local information, we use a probabilistic model in which the nearest neighbors of a given sample are treated as hidden variables. Following the principles of the expectation-maximization algorithm (Dempster et al., 1977), we estimate the margin by computing the expectation of ρ_n(w), averaging out the hidden variables:
ρ̄_n(w) = w^T [ Σ_{i∈M_n} P(x_i = NM(x_n)|w)·|x_n − x_i| − Σ_{j∈H_n} P(x_j = NH(x_n)|w)·|x_n − x_j| ] = w^T z̄_n,   (27)

where M_n = {i : 1 ≤ i ≤ N, y_i ≠ y_n}, H_n = {i : 1 ≤ i ≤ N, y_i = y_n, i ≠ n}, and the expectation is calculated with regard to these neighbor sets. The probabilities P(x_i = NM(x_n)|w) and P(x_j = NH(x_n)|w) are given by:

P(x_i = NM(x_n)|w) = k(‖x_n − x_i‖_{w,1}) / Σ_{j∈M_n} k(‖x_n − x_j‖_{w,1}),  i ∈ M_n,

and

P(x_j = NH(x_n)|w) = k(‖x_n − x_j‖_{w,1}) / Σ_{i∈H_n} k(‖x_n − x_i‖_{w,1}),  j ∈ H_n,   (28)

where k(·) denotes a kernel function given by k(d) = exp(−d/σ). Here, σ is a user-defined parameter determining the resolution at which the data is analyzed locally, which can be estimated through cross-validation.
Once the margin is defined, we form the following optimization problem to estimate w in the logistic regression formulation (Bishop, 2006):

min_w Σ_{n=1}^{N} log(1 + exp(−w^T z̄_n)),  s.t. w ≥ 0.   (29)

In order to obtain a sparse solution for w, we impose an ℓ₁ penalty on w, and the corresponding optimization problem becomes:

min_w Σ_{n=1}^{N} log(1 + exp(−w^T z̄_n)) + λ‖w‖₁,  s.t. w ≥ 0,   (30)

where λ is a regularization parameter that controls the sparseness of w. Since w is constrained to be nonnegative, (30) cannot be solved directly with standard gradient descent. We thus replace (30) with the following cost function:

min_v Σ_{n=1}^{N} log(1 + exp(−(v∘v)^T z̄_n)) + λ‖v‖₂²,   (31)
where v_j² = w_j for 1 ≤ j ≤ J. The above problem can be easily solved via gradient descent with the following update rule:

v ← v − η·(λ·1 − Σ_{n=1}^{N} [exp(−(v∘v)^T z̄_n) / (1 + exp(−(v∘v)^T z̄_n))]·z̄_n) ∘ v,   (32)

where η is a learning rate that can be determined through a standard line search, 1 is the all-ones vector, and ∘ is the Hadamard (element-wise) product. The values of both the kernel width σ and the regularization parameter λ can be estimated through cross-validation using the training dataset.
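The interplay between the neighbor-probability step and the weight update can be sketched end to end on toy data. This is a simplified rendering of the algorithm, with stated assumptions: fixed kernel width, regularization, learning rate, and iteration counts replace the line search and cross-validation, and the data are synthetic with only the first feature informative.

```python
import numpy as np

rng = np.random.default_rng(2)

def pairwise_wl1(X, w):
    """Weighted Manhattan distances ||x_i - x_j||_{w,1} for all pairs."""
    return np.abs(X[:, None, :] - X[None, :, :]).dot(w)

def logo_weights(X, y, sigma=1.0, lam=1.0, lr=0.1, n_outer=5, n_inner=50):
    """Simplified sketch of the local-learning iteration: estimate kernel
    probabilities of nearest hit/miss (Eq. (28)), form the expected margin
    vectors z_bar (Eq. (27)), then gradient-descend on v with w = v**2
    for the l1-regularised logistic loss (Eqs. (31)-(32))."""
    N, D = X.shape
    v = np.ones(D)
    for _ in range(n_outer):
        w = v**2
        K = np.exp(-pairwise_wl1(X, w) / sigma)
        np.fill_diagonal(K, 0.0)
        zbar = np.zeros((N, D))
        for n in range(N):
            hit = (y == y[n])
            hit[n] = False
            miss = (y != y[n])
            p_hit = K[n] * hit;  p_hit /= p_hit.sum()
            p_miss = K[n] * miss; p_miss /= p_miss.sum()
            diff = np.abs(X[n] - X)                  # |x_n - x_i| per feature
            zbar[n] = p_miss @ diff - p_hit @ diff   # expected margin vector
        for _ in range(n_inner):
            m = zbar @ (v**2)
            s = 1.0 / (1.0 + np.exp(m))              # = exp(-m)/(1+exp(-m))
            v = v - lr * (lam - s @ zbar) * v        # Eq. (32), factor 2 in lr
    return v**2

# toy data: only feature 0 separates the two classes, features 1-4 are noise
N = 60
y = np.repeat([1, 2], N // 2)
X = rng.standard_normal((N, 5))
X[:, 0] = np.where(y == 1, -2.0, 2.0) + 0.3 * rng.standard_normal(N)
w = logo_weights(X, y)
assert w[0] > w[1:].max()
```

On these data the learned weight vector concentrates on feature 0, which is the behaviour the margin-based objective is designed to produce.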
Once the optimal weight w is estimated, the class label of a test sample x can be estimated as follows:

ŷ = arg max_c P(y = c | x, w),  s.t. c ∈ {1, 2, 3}.   (33)

Experimental Results

In this section, we proceed to examine the

performance of the discussed SAR imaging, pose


estimation, feature extraction, and target recognition
methods on the experimentally measured 2008
GOTCHA data set (Dungan et al., 2012).
We start by investigating the performance of the H-SIAA-FIAA-SLIM0 approach for SAR imaging. The imaging results for three typical civilian vehicles using the conventional BP algorithm and H-SIAA-FIAA-SLIM0 are shown in the left and right columns of Fig. 11, respectively, from which we can see that the hybrid method provides higher resolution and lower sidelobe levels compared to BP. One can also observe from Fig. 11 that the gaps between the inner and outer rings differ across target types (largest, moderate, and smallest gaps for the VAN, SUV, and SEDAN, respectively).


the dynamic range of [−30, 0] dB after the 2-D Fourier transform and normalization, as shown in Fig. 6. Table 2 shows that the PST alone yields poor ATR performance and also incurs a comparatively large computational cost. The hybrid 2-D FFT + PST provides better ATR performance and requires less computation than the PST alone, as shown in Table 2.
TABLE 2 ATR PERFORMANCE RESULTS WHEN POSE ESTIMATION IS OBTAINED VIA 2-D FFT, PST, AND 2-D FFT + PST WITH Δ = 5° (ON HIGH RESOLUTION SAR IMAGES)

Method        Recognition rate (%)   Run time (sec)
2-D FFT       93.63                  0.01190
PST           83.14                  0.02237
2-D FFT+PST   94.01                  0.01494
M.C.¹         95.88                  n/a

¹ Manual correction of the visually estimated pose.

From Table 2, we observe that the computational complexity of 2-D FFT + PST is slightly greater than that of the 2-D FFT alone. However, the ATR rate improvement justifies its usage. The ATR performance of 2-D FFT + PST almost approaches that obtained by using manual correction (M.C.) of the visually estimated pose, as shown in Table 2. Note also that the ATR performance of 2-D FFT + PST is not sensitive to the choice of Δ, as shown in Table 3.
TABLE 3 ATR PERFORMANCE FOR VARIOUS Δ FOR 2-D FFT + PST (ON HIGH RESOLUTION SAR IMAGES)

FIG. 11 IMAGING RESULTS (FIRST ROW: SEDAN, MIDDLE ROW: SUV, BOTTOM ROW: VAN; LEFT COLUMN: BP, RIGHT COLUMN: H-SIAA-FIAA-SLIM0)

As shown in Fig. 11, the two parallel straight lines in


each SAR image can be used to determine the pose of
the corresponding target. We compare the
performance of the 2-D FFT based method, the
projection slice theorem (PST), and the proposed
hybrid method (2-D FFT + PST). We consider data in


Δ (deg)               2.5     5.0     7.5
Recognition rate (%)  94.01   94.01   94.01

After the target pose estimation, we rotate the SAR image to make the parallel straight lines vertical. Using the proposed horizontal and vertical CSVs, we reduce the target chip size to focus only on the target of interest. The vertical CSV is then extracted and used as a simple feature for target recognition. The first, second, and last rows of Fig. 12 show the vertical CSVs (VCSVs) of the SEDAN, SUV, and VAN, respectively. The VCSVs in the left column are acquired from BP images rotated by pose estimates calculated from the BP images, and those in the middle column are obtained from BP images rotated by pose estimates from the H-SIAA-FIAA-SLIM0 images. The VCSVs in the right column are acquired from H-SIAA-FIAA-SLIM0 images rotated by pose estimates from the H-SIAA-FIAA-SLIM0 images. We can see from Fig. 12 that the vertical CSV profiles obtained from the hybrid high resolution SAR imaging method clearly reveal the two strong peaks in the middle and the two weaker peaks


on the two sides, while those obtained from BP fail to do so. This is the case even though we used the high resolution H-SIAA-FIAA-SLIM0 SAR images to determine the target poses for the BP images in Fig. 12.
Finally, the local learning based feature selection for high dimensional data analysis is applied to the features of the target chips obtained via H-SIAA-FIAA-SLIM0 and to those of the target chips obtained via BP. We divide the data into two groups, one for training (116 SEDANs, 91 SUVs, and 61 VANs) and the other for testing (115 SEDANs, 91 SUVs, and 61 VANs). This method uses the weight vector w to obtain a reduced dimension through feature selection on the training points.

FIG. 13 FEATURE-SELECTED WEIGHT


The dimension reduces from 310 × 1 (the size of the original 1-D feature space) to 55 × 1 (the size of the feature-weighted space), as shown in Fig. 13.

The recognition rates with respect to the features obtained from the BP and H-SIAA-FIAA-SLIM0 imaging results are listed in Table 4.

TABLE 4 RECOGNITION RATE COMPARISON OF THE FEATURES OBTAINED FROM THE BP AND H-SIAA-FIAA-SLIM0 IMAGING RESULTS

Method              Recognition rate (%)
BP                  61.42
H-SIAA-FIAA-SLIM0   94.01


FIG. 12 TARGET VERTICAL CSV PROFILES: TOP ROW: SEDAN, MIDDLE ROW: SUV, BOTTOM ROW: VAN; LEFT COLUMN: BP IMAGE BASED POSE CORRECTION ON BP IMAGE, MIDDLE COLUMN: H-SIAA-FIAA-SLIM0 IMAGE BASED POSE CORRECTION ON BP IMAGE, RIGHT COLUMN: H-SIAA-FIAA-SLIM0 IMAGE BASED POSE CORRECTION ON H-SIAA-FIAA-SLIM0 IMAGE


The recognition rate of BP in Table 4 is obtained from


pose estimates based on BP SAR images (BP pose
correction on BP SAR images). We observe from Table
4 that the hybrid high resolution H-SIAA-FIAA-SLIM0
method yields a much higher target recognition rate
than the BP method. This also exceeds the 90% recognition rate reported for ATR based on high resolution SAR imaging that utilizes scatterer locations across sub-apertures (Gianelli and Xu, 2013).
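The recognition rates in Table 4 are fractions of correctly classified test chips. As an illustration only, the sketch below scores a 1-nearest-neighbour classifier on toy data; the paper's actual classifier is not specified in this section, so 1-NN is merely a stand-in.

```python
import numpy as np

def recognition_rate(train_X, train_y, test_X, test_y):
    """Percentage of test samples whose nearest training sample
    (Euclidean distance) carries the correct class label.
    1-NN is an assumed stand-in classifier, not the paper's method."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)  # distances to all training chips
        preds.append(train_y[np.argmin(d)])      # label of the nearest chip
    return 100.0 * np.mean(np.asarray(preds) == np.asarray(test_y))

# Tiny toy data: two well-separated classes.
train_X = np.array([[0.0, 0.0], [10.0, 10.0]])
train_y = np.array([0, 1])
test_X = np.array([[0.5, -0.5], [9.0, 11.0]])
test_y = np.array([0, 1])
rate = recognition_rate(train_X, train_y, test_X, test_y)
```

With the real 55-dimensional selected features, the same scoring over the 267 test chips yields the 61.42% (BP) and 94.01% (H-SIAA-FIAA-SLIM0) rates of Table 4.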
Conclusions
We have demonstrated the merits of high resolution
wide-angle SAR imaging for enhanced automatic
target recognition of civilian vehicles. The hybrid high
resolution SAR imaging method yields high-quality
imaging results, from which simple and representative
target features can be extracted. In addition, the
proposed pose estimation method and the simple CSV
profiles facilitate the SAR image registration, which
serves as the preprocessing procedure for target
recognition. We have also extended the algorithm of
local learning based feature selection for high
dimensional data analysis to multiple-class target
classification. A combination of the hybrid high
resolution SAR imaging, the effective and efficient
pose estimation, the simple CSV profiles, and the
reliable target recognition method has made it possible
to distinguish the three civilian vehicle types, namely,
SEDAN, SUV, and VAN with high fidelity.
ACKNOWLEDGMENT

This work was supported in part by NSF CCF-1218388.


The views and conclusions contained herein are those
of the authors and should not be interpreted as
necessarily representing the official policies or
endorsements, either expressed or implied, of the U.S.
Government. The U.S. Government is authorized to
reproduce and distribute reprints for Governmental
purposes notwithstanding any copyright notation
thereon. Deoksu Lim, Luzhou Xu, and Jian Li are with
the Department of Electrical and Computer
Engineering, University of Florida, Gainesville, FL
32611,
USA
(email:
lemduck1@ufl.edu;
xuluzhou@ufl.edu; li@dsp.ufl.edu). Yijun Sun is with
the Department of Microbiology and Immunology,
The State University of New York, Buffalo, NY 14214,
USA (email: yijunsun@buffalo.edu).

REFERENCES

Bishop, C. M., (2006). Pattern Recognition and Machine Learning. Springer.

Carrara, W. G., Majewski, R. M., and Goodman, R. S., (1995). Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Artech House.

Crammer, K., Gilad-Bachrach, R., Navot, A., and Tishby, N., (2002). Margin analysis of the LVQ algorithm. In Proceedings of Advances in Neural Information Processing Systems, vol. 2, pp. 462-469.

Dempster, A. P., Laird, N. M., and Rubin, D. B., (1977). Maximum Likelihood from Incomplete Data via the EM Algorithm. J. Royal Statistical Soc., Series B, vol. 39, no. 1, pp. 1-38.

Dungan, K. E. et al., (2012). Wide Angle SAR Data for Target Discrimination Research. In [Algorithms for Synthetic Aperture Radar Imagery XIX], E. G. Zelnio and F. D. Garber, eds., Proc. SPIE 8394.

Gianelli, C. D., and Xu, L., (2013). Focusing, imaging, and ATR for the Gotcha 2008 wide angle SAR collection. In [Algorithms for Synthetic Aperture Radar Imagery XX], E. G. Zelnio and F. D. Garber, eds., Proc. SPIE 8746, June.

Glentis, G.-O., and Jakobsson, A., (2011). Efficient Implementation of Iterative Adaptive Approach Spectral Estimation Techniques. IEEE Trans. Signal Process., vol. 59, no. 9, pp. 4154-4167, Sep.

Glentis, G.-O., Zhao, K., Jakobsson, A., and Li, J., (2013). Non-Parametric High-Resolution SAR Imaging. IEEE Trans. Signal Process., vol. 61, no. 7, pp. 1614-1624, Apr.

Gorham, L. A., and Moore, L. J., (2010). SAR image formation toolbox for MATLAB. In [Algorithms for Synthetic Aperture Radar Imagery XVII], E. G. Zelnio and F. D. Garber, eds., Proc. SPIE 7699, Apr.

Jakowatz, C. V. Jr., Wahl, D. E., Eichel, P. H., Ghiglia, D. C., and Thompson, P. A., (1996). Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach. Springer.

Munson, D. C. Jr., O'Brien, J. D., and Jenkins, W. K., (1983). A tomographic formulation of spotlight-mode synthetic aperture radar. Proc. IEEE, vol. 71, pp. 917-925.

Schapire, R. E., Freund, Y., Bartlett, P., and Lee, W. S., (1998). Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, vol. 26, no. 5, pp. 1651-1686.


SDMS, (2014). Visualization of ray tracing with Taurus wagon CAD model. Sensor Data Management System (SDMS) public web site. https://www.sdms.afrl.af.mil/index.php?collection=cv_dome.

Shapiro, L. G., and Stockman, G. C., (2001). Computer Vision. Prentice Hall.

Sun, Y., Todorovic, S., and Goodison, S., (2010). Local Learning Based Feature Selection for High Dimensional Data Analysis. IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1610-1626, Sep.

Tan, X., Roberts, W., Li, J., and Stoica, P., (2011). Sparse Learning via Iterative Minimization with Application to MIMO Radar Imaging. IEEE Trans. Signal Process., vol. 59, no. 3, pp. 1088-1101, Mar.

Vapnik, V., (2000). The Nature of Statistical Learning Theory. New York: Springer.

Vu, D., Xu, L., Xue, M., and Li, J., (2012). Nonparametric Missing Sample Spectral Analysis and Its Applications to Interrupted SAR. IEEE J. Select. Top. Signal Process., vol. 6, no. 1, pp. 1-14, Feb.

Wikipedia, (2014). Bilinear interpolation. Wikipedia.org. http://en.wikipedia.org/wiki/Bilinear_interpolation.

Woodbury, M. A., (1950). Inverting modified matrices. Memorandum Rept. 42, Statistical Research Group, Princeton University, Princeton, NJ.

Xue, M., Xu, L., and Li, J., (2011). IAA Spectral Estimation: Fast Implementation Using the Gohberg-Semencul Factorization. IEEE Trans. Signal Process., vol. 59, no. 7, pp. 3251-3261, Jul.

Yardibi, T., Li, J., Stoica, P., Xue, M., and Baggeroer, A. B., (2010). Source Localization and Sensing: A Nonparametric Iterative Adaptive Approach Based on Weighted Least Squares. IEEE Trans. Aerospace and Electronic Systems, vol. 46, no. 1, pp. 425-443, Jan.

Deoksu Lim received the B.S. and M.E. degrees in electronics and information engineering from Korea University, Korea, in 2000 and 2003, respectively. He also received the M.S. degree in electrical and computer engineering from the University of Florida in 2012. He is currently pursuing a Ph.D. degree in the Department of Electrical and Computer Engineering at the University of Florida, Gainesville, Florida. His primary research interest is automatic target recognition (ATR) in synthetic aperture radar (SAR).

Luzhou Xu received the B.Eng. and M.S. degrees in electrical engineering from Zhejiang University, Hangzhou, China, in 1996 and 1999, respectively, and the Ph.D. degree in electrical engineering from the University of Florida, Gainesville, in 2006. He is currently the Chief Engineer of Integrated Adaptive Applications (IAA), Inc., and an adjunct research associate professor at the University of Florida. He was with Zhongxing Research and Development Institute, Shanghai, China, from 1999 to 2001, with Philips Research East Asia, Shanghai, China, from 2001 to 2003, and with ArrayComm LLC, San Jose, CA, from 2006 to 2008. His research interests include statistical and array signal processing and their applications.

Yijun Sun received two B.S. degrees in electrical and mechanical engineering from Shanghai Jiao Tong University, China, in 1995, and the M.S. and Ph.D. degrees in electrical engineering from the University of Florida, Gainesville, in 2003 and 2004, respectively. From 2005 to 2012, he was an Assistant Scientist at the Interdisciplinary Center for Biotechnology Research and an affiliated faculty member of the Department of Electrical and Computer Engineering at the University of Florida. He is currently an Assistant Professor of bioinformatics in the Departments of Microbiology and Immunology, Computer Science and Engineering, and Biostatistics at the State University of New York at Buffalo. His research interests are primarily in machine learning, data mining, bioinformatics, and their applications to metagenomics and cancer informatics.
Jian Li (S'87-M'91-SM'97-F'05) received the M.Sc. and Ph.D. degrees in
electrical engineering from The Ohio
State University, Columbus, in 1987
and 1991, respectively.
From April 1991 to June 1991, she was
an Adjunct Assistant Professor with
the Department of Electrical Engineering, The Ohio State University, Columbus. From July 1991 to
June 1993, she was an Assistant Professor with the
Department of Electrical Engineering, University of
Kentucky, Lexington. Since August 1993, she has been with
the Department of Electrical and Computer Engineering,
University of Florida, Gainesville, where she is currently a
Professor. In Fall 2007, she was on sabbatical leave at MIT,
Cambridge, Massachusetts. Her current research interests
include spectral estimation, statistical and array signal
processing, and their applications.
Dr. Li is a Fellow of IEEE and a Fellow of IET. She is a
member of Sigma Xi and Phi Kappa Phi. She received the


1994 National Science Foundation Young Investigator


Award and the 1996 Office of Naval Research Young
Investigator Award. She was an Executive Committee
Member of the 2002 International Conference on Acoustics,
Speech, and Signal Processing, Orlando, Florida, May 2002.
She was an Associate Editor of the IEEE Transactions on
Signal Processing from 1999 to 2005, an Associate Editor of
the IEEE Signal Processing Magazine from 2003 to 2005, and
a member of the Editorial Board of Signal Processing, a
publication of the European Association for Signal
Processing (EURASIP), from 2005 to 2007. She was a member
of the Editorial Board of the IEEE Signal Processing
Magazine from 2010 to 2012. She was a member of the
Editorial Board of Digital Signal Processing: A Review
Journal, a publication of Elsevier, from 2006 to 2012. She is a


co-author of the papers that have received the First and


Second Place Best Student Paper Awards, respectively, at the
2005 and 2007 Annual Asilomar Conferences on Signals,
Systems, and Computers in Pacific Grove, California. She is
a co-author of the paper that has received the M. Barry
Carlton Award for the best paper published in IEEE
Transactions on Aerospace and Electronic Systems in 2005.
She is a co-author of the paper that has received the
Lockheed-Martin Best Student Paper Award at the 2009 SPIE
Defense, Security, and Sensing Conference in Orlando,
Florida. She is also a co-author of a paper published in IEEE
Transactions on Signal processing that has received the Best
Paper Award in 2013 from the IEEE Signal Processing
Society.
