
sensors

Article
A Camera-Based Target Detection and Positioning
UAV System for Search and Rescue (SAR) Purposes
Jingxuan Sun, Boyang Li, Yifan Jiang and Chih-yung Wen *
Department of Mechanical Engineering, The Hong Kong Polytechnic University, Hong Kong, China;
jingxuan.j.sun@connect.polyu.hk (J.S.); boyang.li@connect.polyu.hk (B.L.);
jiang.uhrmacher@connect.polyu.hk (Y.J.)
* Correspondence: cywen@polyu.edu.hk; Tel.: +852-2766-6644

Academic Editors: Felipe Gonzalez Toro and Antonios Tsourdos


Received: 30 August 2016; Accepted: 19 October 2016; Published: 25 October 2016
Abstract: Wilderness search and rescue entails performing a wide range of work in complex
environments and large regions. Given the concerns inherent in large regions due to limited
rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for
providing aerial imaging. In recent years, technological advances in areas such as micro-technology,
sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one
camera-based target detection and positioning system is developed and integrated into a fully
autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time
target identification, post-target identification and location and aerial image collection for further
mapping applications. Its performance is examined using several simulated search and rescue
missions, and the test results demonstrate its reliability and efficiency.

Keywords: unmanned aerial vehicle (UAV); wilderness search and rescue; target detection

1. Introduction
Wilderness search and rescue (SAR) is challenging, as it involves searching large areas with
complex terrain for a limited time. Common wilderness search and rescue missions include searching
and rescuing injured humans and finding broken and lost cars in deserts, forests or mountains.
Incidents of commercial aircraft disappearing from radar, such as the case in Indonesia in 2014 [1–3],
also entail a huge search radius, and search timeliness is critical to "the probability of finding and
successfully aiding the victim” [4–7]. This research focuses on applications common in eastern Asian
locations such as Hong Kong, Taiwan, the southeastern provinces of mainland China, Japan and
the Philippines, where typhoons and earthquakes happen a few times annually, causing landslides
and river flooding that result in significant damage to houses, roads and human lives. Immediate
assessment of the degree of damage and searching for survivors are critical requirements for
constructing a rescue and revival plan. UAV-based remote image sensing can play an important
role in large-scale SAR missions [4–6,8,9].
With the development of micro-electro-mechanical system (MEMS) sensors, small UAVs
(with a wingspan of under 10 m) have become a promising platform for conducting search, rescue and
environmental surveillance missions. UAVs can be equipped with various remote sensing systems,
providing powerful tools for disaster mitigation, including rapid all-weather flood and
earthquake damage assessment. Today, low-cost drones allow people to quickly develop small UAVs,
which have the following specific advantages:

• Can loiter for lengthy periods at preferred altitudes;


• Produce remote sensor data with better resolution than satellites, particularly in terms of
image quality;


• Low cost, rapid response;


• Capable of flying below normal air traffic height;
• Can get closer to areas of interest.

Applying UAV technology and remote sensing to search, rescue and environmental surveillance
is not a new idea. Habib et al. stated the advantages of applying UAV technologies to surveillance,
security and mission planning, compared with the normal use of satellites, and various technologies
and applications have been integrated and tested on UAV-assisted operations [9–13].
One factor that cannot be ignored when applying UAV-assisted SAR is the number of required operators.
It is claimed that at least two roles are required: one pilot who flies, monitors, plans and controls the
UAV, and a second pilot who operates the sensors and information flow [14]. Practically, these two roles
can be filled by a single operator, yet studies on ground robots have also suggested that a third person
is recommended to monitor and protect the operator(s). Researchers have also studied the human
behavior involved in managing multiple UAVs, and have found that "the span of the human control is
limited” [4,14,15]. As a result, a critical challenge of applying multiple UAVs in SAR is simultaneously
monitoring information-rich data streams, including flight data and aerial video. The possibility of
simplifying the human roles by optimizing information presentation and automating information
acquisition was also explored [4], in which a fixed-wing UAV was used as a platform, and they
analyzed and compared three computer vision algorithms to improve the presentation.
To automate information acquisition, it has been suggested that UAV systems integrate
target-detection technologies for detecting people, cars or aircraft. A common method of observing
people is the detection of heat features, which can be achieved by applying infrared camera technology
and specifically developed algorithms. In 2005, a two-stage method based on a generalized template
was presented [16]. In the first stage, a fast screening procedure is conducted to locate the potential
person. Then, the hypothesized location of the person is examined by an ensemble classifier. In contrast,
human detection based on color imagery has also been studied for many years. A human detection
method based on background subtraction was developed, but it requires pre-processing before
a search mission [17].
was presented that uses color images and models the human/flexible parts, then detects the parts
separately [18]. A combination of both thermal and color imagery for human detection was also
studied in [19].
To enhance information presentation and support humanitarian action, geo-referenced data from
disaster-affected areas is expected to be produced. Numerous different technologies and algorithms
for generating geo-referenced data via UAV have been studied and developed. A self-adaptive,
image-matching technique to process UAV video in real-time for quick natural disaster response
was presented in [20]. A prototype UAV and a geographical information system (GIS) applying
the stereo-matching method to construct a three-dimensional hazard map were also developed [21].
The Scale-Invariant Feature Transform (SIFT) algorithm was improved in [22] by applying a simplified
Förstner operator. A method for rectifying images on the pseudo center points of auxiliary data was
proposed in [23].
The aim of this study is to build an all-in-one camera-based target detection and positioning
system that integrates the necessary remote sensors for wilderness SAR missions into a fixed-wing
UAV. Identification and search algorithms were also developed. The UAV system can autonomously
conduct a mission, including auto-takeoff and auto-landing. The on-board searching algorithm can
report victims or cars with GPS coordinates in real-time. After the mission, a map of the hazard
area can be generated to facilitate further logistics decisions and rescue troop action. Despite their
importance, the algorithms for producing the hazard map are beyond the scope of this paper. In this
work, we focus on the possibility of using a UAV to simultaneously collect geo-referenced data and
detect victims. A hazard map and point clouds are generated by the commercial software Pix4Dmapper™
(Pix4Dmapper Discovery version 2.0.83, Pix4D SA, Lausanne, Switzerland).
Figure 1 provides a mission flowchart. Once a wilderness SAR mission is requested to the
Ground Control System (GCS), the GCS operator designs a flight path that covers the search area and
sends the UAV into the air to conduct the mission. During the flight, the on-board image processing
system is designed to identify targets such as cars or victims, and to report possible targets with
the corresponding GPS coordinates to the GCS within 60 m accuracy. These real-time images and
approximate GPS coordinates support immediate rescue actions, including directing the victim to wait
for rescue at the current location and delivering emergency medicine, food and water. Meanwhile, the UAV
is transmitting real-time video to the GCS and recording high-resolution aerial video that can be
used, once the UAV lands, in post-processing tasks such as target identification and mapping the
affected area. The post-target identification is designed to report victims’ accurate locations within
15 m, and the map of the affected area can be used to construct a rescue plan.

Figure 1. Flowchart of a wilderness SAR mission using the all-in-one UAV.

The remainder of this paper is organized as follows. Section 2 describes the details of the UAV
system. Section 3 presents the algorithm and the implementation. Section 4 presents the tests and
results, and Section 5 concludes the paper.

2. Experimental Design
The all-in-one camera-based target detection and positioning UAV system integrates the UAV
platform, the communication system, the image system, and the GCS. The detailed hardware
construction of the UAV is introduced in this section.

2.1. System Architecture


The purpose of the UAV system developed in this study was to find targets’ GPS coordinates
within a limited amount of time. To achieve this, a suitable type of aircraft frame was needed.
The aircraft had to have enough fuselage space to accommodate the necessary payload for the task.
The vehicle configuration and material had to exhibit the good aerodynamic performance and reliable
structural strength needed for long-range missions. The propulsion system for the aircraft was
calculated and selected once the UAV’s configuration and requirements were known.
Next, a communication system, including a telemetry system, was used to connect the ground
station to the UAV. After adding the flight control system, the aircraft could take off and follow the
designed route autonomously. Finally, with the help of the mission system (auto antenna tracker (AAT),
cameras, the on-board processing board Odroid and a gimbal), targets and their GPS coordinates could
be found. Figure 2 shows the UAV system's systematic framework, the details of which are explained
in the following sub-sections. The whole system weighs 3.35 kg and takes off via hand launching.

Figure 2. Systematic framework of the UAV system.

2.2. Airframe of the UAV System

The project objective was to develop a highly integrated system capable of large-area SAR
missions. Thus, the flight vehicle, as the basic platform of the whole system, was chosen first.
Given the prerequisites of quick response and immediate assessment capabilities, a fixed-wing aircraft
was chosen for its high-speed cruising ability, long range and flexibility in complex climatic conditions.
To shorten the development cycle and improve system maintenance, an off-the-shelf commercial UAV
platform "Talon" from the X-UAV company was used (Figure 3). The wingspan of the Talon is 1718 mm
and the wing area is 0.062 m². The take-off weight of this airframe can reach 3.8 kg.

Figure 3. Overall View of X-UAV Talon [24].


2.3. Propulsion System

The UAV uses a Sunnysky X-2820-5 motor working in conjunction with an APC 11×5.5EP propeller.
A 10,000 mAh 4-cell 20 C LiPo battery was used, and this propulsion system provides a maximum
cruise time of approximately 40 min at an airspeed of 18 m/s.

2.4. Navigation System

The main component of the navigation system is the Pixhawk flight controller running the free
ArduPilot Plane firmware, equipped with a GPS and compass kit, an airspeed sensor and a sonar for
measuring heights below 7 m. The airplane with this navigation system can conduct a fully
autonomous mission, including auto take-off, cruise via waypoints, return to the home position and
auto landing, with enhanced fail-safe protection.

2.5. GCS and Data Link

The GCS works via a data link that enables the researcher to monitor or interfere with the UAV
during an auto mission. Mission Planner, an open-source ground station application compatible with
Windows, was installed on the GCS laptop for mission design and monitoring. An HKPilot 433 MHz
500 mW radio transmitter and receiver was installed on the GCS laptop, along with a Pixhawk flight
controller. An auto antenna tracker (AAT) worked in conjunction with a 9 dBi patch antenna to provide
a reliable data link within a 5-km range.

2.6. Post-Imaging Processing and Video Transmission System

The UAV system is designed with a fixed-wing aircraft flying at airspeeds ranging from 15 to 25 m/s
for quicker response times on SAR missions. The ground speed may reach 40 m/s in extreme weather
conditions. A GoPro HERO 4 was installed in the vehicle after considering the balance between its
weight and image quality capabilities. In a searching and mapping mission, the aerial image always
faces the ground. During flight, actions such as rolling, pitching or other unexpected vibrations can
disrupt the camera's stability, which may lead to unclear video. A Mini 2D camera gimbal produced by
Feiyu Tech Co., Ltd. (Guilin, China), powered by two brushless motors, was used to stabilize the camera
(Figure 4). The camera (GoPro HERO 4, GoPro, Inc., San Mateo, CA, USA) was set to video mode with a
1920 × 1080 pixel resolution in a narrow field of view (FOV) at 25 frames per second. During the flight,
an analog image signal is sent to an on-screen display (OSD) and video transmitter. With a frequency of
5.8 GHz, the aerial video can be visualized by the GCS in real-time, while the high-resolution video is
recorded for use during post-processing.

Figure 4. GoPro HERO 4 attached to the camera gimbal.

2.7. On-Board, Real-Time Imaging Process and Transmission System

A real-time imaging process and transmission system was set up on the UAV. The "oCam"
(shown in Figure 5), a 5-megapixel charge-coupled device (CCD) camera, was chosen as the image
source for the on-board target identification system. The focal length of the camera is 3.6 mm and it
has a field of view of 65°. It weighs 37 g and has a 1920 × 1080 pixel resolution at 30 frames per second.
The development of the on-board image processing was based on the Odroid XU4 (Hardkernel Co.,
Ltd., GyeongGi, South Korea) (Figure 5b), which is a light, small, powerful computing device equipped
with a 2-GHz core CPU and 2 Gbyte of LPDDR3 Random-Access Memory (RAM). It also provides
USB 3.0 interfaces that increase transfer speeds for high-resolution images. The Odroid XU4 used on
the UAV in this system runs Ubuntu 14.04. The details of the algorithm and implementation will be
discussed in Section 3. The Odroid board was connected to a 4th Generation (4G) cellular network via
a HUAWEI (Shenzhen, China) E3372 USB dongle. Once the target is identified by the Odroid XU4,
that particular image is transmitted through the 4G cellular network to the GCS.

Figure 5. (a) oCam [25] and (b) Odroid XU4.

3. Algorithm for and Implementation of Target Identification and Mapping

The target identification program was implemented using an on-board micro-computer (Odroid XU4)
and the ground control station. The program can automatically identify and report cars, people and
other specific targets.

3.1. Target Identification Algorithm

The mission is to find victims who need to be rescued, crashed cars or aircraft. The algorithm
approaches these reconnaissance problems by using the color signature. These targets create a good
contrast with the backgrounds due to their artificial colors. Figure 6 shows the flowchart of the
reconnaissance algorithm. The aerial images are processed in the YUV rather than the RGB color space
to identify the color signatures [26]. This process can be achieved by calling the functions provided by
the OpenCV libraries. Both blue and red signatures are examined.

Figure 6. Flowchart of the identification algorithm.

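To make the color-space step concrete, the following minimal OpenCV sketch (written for this description, not taken from the authors' code; the file name aerial_frame.jpg and the helper name chrominance_layers are illustrative) shows how a frame can be converted to YCrCb and how the per-layer statistics used in Step 1 below are obtained.

```python
import cv2

def chrominance_layers(frame_bgr):
    """Convert an aerial frame to YCrCb and return the Cr and Cb layers.

    OpenCV loads frames as BGR; cv2.COLOR_BGR2YCrCb yields the Y, Cr, Cb
    planes, i.e., the luma plus the red/blue chrominance used here.
    """
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    _, cr, cb = cv2.split(ycrcb)
    return cr, cb

if __name__ == "__main__":
    frame = cv2.imread("aerial_frame.jpg")  # illustrative file name
    cr, cb = chrominance_layers(frame)
    # Step 1: per-layer statistics used to adapt the threshold.
    for name, layer in (("Cr", cr), ("Cb", cb)):
        print(name, layer.min(), layer.max(), layer.mean())
```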
The crucial step of the algorithm is to find an appropriate value of the threshold. A self-adapting
method was applied to the reconnaissance program. The identification includes the following steps.

Step 1: Read the blue and red chrominance values (Cb and Cr layers) of the image, and determine
the maximum, minimum and mean values of the chrominance matrix. These values are
then used to adapt the threshold.
Step 2: Distinguish whether existing objects are in great contrast. The distinction is processed by
comparing the maximum/minimum and mean values of the chrominance. Introducing this
step improves the efficiency with which the aerial video is processed, because the relevant
identification is skipped if the criteria are not met. The criteria are expressed in Equation (1):

max − mean > 30
mean − min < 30     (1)

Step 3: Determine the appropriate value of the threshold, which is given by Equation (2), where the
thresholds with subscripts b and r denote blue and red, respectively. Ks is the sensitivity
factor, and the program becomes more sensitive as it increases. Ks also changes with different
cameras, and was set as 0.1 for the GoPro HERO 4 and 0.15 for the oCam in this study.

Threshold_b = max − (max − mean) × Ks
Threshold_r = (mean − min) × Ks + min     (2)

Step 4: Binarize the image with the threshold.

f(p) = 0,   (p < Threshold)
f(p) = 255, (p ≥ Threshold)     (3)

where 0 represents the black color and 255 represents the white color.
Step 5: Examine the number of targets and their sizes. The results are abandoned if there are too
many targets (over 20) in a single image, because such results are typically caused by noise at
the flight height of 80 m. The count criterion is used because it is rare for a UAV to capture
over 20 victims or cars in a single image in the wilderness. When examining the size of the
targets, the results are abandoned if the suspected target has only a few or too many pixels.
The criterion for the number of pixels is determined by the height of the UAV and the size of
the target.
Step 6: The targets are marked with blue or red circles on the original image and reported to the GCS.
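
The self-adapting thresholding of Steps 2–5 can be sketched as follows. This is an illustrative implementation rather than the on-board program: it covers only the blue branch of Equation (2), and the blob-size limits min_px and max_px are assumed values, since the paper only states that the pixel criterion depends on the flight height and the target size.

```python
import cv2
import numpy as np

def detect_blue_candidates(cb_layer, ks=0.15, min_px=15 * 15, max_px=5000,
                           max_targets=20):
    """Self-adapting threshold on one chrominance layer (Steps 2-5, blue case)."""
    mx = float(cb_layer.max())
    mn = float(cb_layer.min())
    mean = float(cb_layer.mean())

    # Step 2: skip the frame if there is no strong chrominance contrast (Equation (1)).
    if not (mx - mean > 30 and mean - mn < 30):
        return []

    # Step 3: self-adapting threshold near the maximum (Equation (2), blue case).
    threshold = mx - (mx - mean) * ks

    # Step 4: binarize, 255 above the threshold and 0 below it (Equation (3)).
    binary = np.where(cb_layer >= threshold, 255, 0).astype(np.uint8)

    # Step 5: reject frames with too many blobs and blobs of implausible size.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    blobs = [(stats[i, cv2.CC_STAT_AREA], tuple(centroids[i])) for i in range(1, n)]
    if len(blobs) > max_targets:
        return []
    return [c for area, c in blobs if min_px <= area <= max_px]
```

The connected-component call is simply one convenient way to count and size the white regions produced by the binarization; the returned centroids correspond to the circles drawn in Step 6.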

Figure 7. (a) The original image with red target in RGB color space; (b) the Cr layer of the YCbCr color
space and (c) the binarized image with threshold.

Figure 7 demonstrates a test of the target identification algorithm using an aerial image with a tiny
red target. Figure 7a is the original image captured from the aerial video with the target circled for
easy identification. The Cr data were loaded for red color, as shown in Figure 7b. Figure 7c shows the
results of the binarized image with a threshold of 0.44 (the white spot in the upper left quadrant).

3.2. On-Board Target Identification Implementation

Before developing the on-board system for identifying targets, the method used to report the
targets and their locations to the GCS must be determined. Considering all of the subsystems on the
vehicle and the frequencies used for the data link (433 MHz), live video transmission (5.8 GHz) and
remote controller (2.4 GHz), the on-board target identification system is designed to connect to the
base station of a cellular network, 800–900 MHz in the proposed testing area (Hong Kong and Taiwan).
The results are then uploaded to the Dropbox server. Consequently, the on-board target identification
system consists of four modules: the Odroid as the core hardware, an oCam CCD camera, a GPS module
and a dongle that connects to the 4G cellular network and provides Internet access for the Odroid.
The workflow of the on-board target identification system, designed as shown in Figure 8, includes
three functions: self-starting, identification and target reporting.

The self-starting is achieved via a Linux shell script. The program runs automatically when the
Odroid is powered on. The statuses of the camera, the Internet and the GPS module are checked.
After successfully connecting all of the modules, the identification program runs in a loop until the
Odroid is powered off. The identification program usually processes about four frames per second.
During the flight, the GPS coordinates of the aircraft are directly treated as the location of the targets,
because a rapid report is preferable to taking the time to obtain a highly accurate report during flight.
The accurate locations of the targets are determined post-flight using the high-resolution aerial video
taken by the GoPro camera.

When reporting, the system scans the resulting files every 30 s and packs the new results, which
are uploaded as a package instead of as individual frames to limit time consumption, because the
Dropbox server requires verification for each file. The testing results show that uploading a package
every 30 s is faster than uploading frame by frame. The reporting results include the images of the
marked targets and a text file of the GPS coordinates. These files are then stored in an external SD card
that allows the GCS to quickly check the results post-flight. Figure 9 shows a truck reported by the
on-board target identification system.

Figure 8. Flowchart of the on-board target identification system.
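
The 30-second reporting cycle can be sketched as a small loop; the directory paths, the archive naming and the upload() placeholder below are hypothetical, and the real system pushes the package to a Dropbox folder over the 4G link as described above. The design point is simply that one archive per interval avoids the per-file verification overhead.

```python
import time
import zipfile
from pathlib import Path

RESULTS_DIR = Path("/home/odroid/results")   # illustrative paths
OUTBOX_DIR = Path("/home/odroid/outbox")

def upload(package_path):
    """Placeholder for the actual transfer (the system pushes packages to a
    Dropbox folder over the 4G link); left abstract here on purpose."""
    print("uploading", package_path)

def reporting_loop(period_s=30):
    """Every 30 s, pack newly produced result files into one archive and
    upload the archive instead of the individual frames."""
    OUTBOX_DIR.mkdir(parents=True, exist_ok=True)
    seen = set()
    while True:
        new_files = [p for p in RESULTS_DIR.glob("*") if p.is_file() and p not in seen]
        if new_files:
            package = OUTBOX_DIR / f"report_{int(time.time())}.zip"
            with zipfile.ZipFile(package, "w") as zf:
                for p in new_files:
                    zf.write(p, arcname=p.name)
                    seen.add(p)
            upload(package)
        time.sleep(period_s)
```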

Figure 9. A blue truck reported by the on-board target identification system, marked by the
identification program with a white circle.

3.3. Post-Target Identification Implementation via Aerial Video and Flight Log

Post-target identification is conducted using the high-resolution aerial video taken by the GoPro
camera and stored in the SD card, together with the flight data log from the flight controller, to capture
all possible targets to be rescued and obtain their accurate locations. In this section, the technical details
of post-target identification are discussed.

3.3.1. Target Identification

The altitude of the flight path is carefully determined during the flight tests via the
inertial-measurement unit and GPS data in the flight controller. Any target coated with artificial colors
whose image is equal to or larger than the estimated image size (15 × 15 pixels), calculated according
to the height of the UAV and the target's physical dimensions, should be reported.

Figure 10 shows an aerial image of a 0.8 m × 0.8 m blue board with a letter 'Y' on it from flight
heights of 50 m, 80 m and 100 m. The height of the flight path for the later field test was accordingly
determined to be lower than 80 m; otherwise, the targets would only be several pixels in the image and
might be treated as noise.
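
As a rough plausibility check of the 15 × 15 pixel criterion, the expected image size of a 0.8 m board can be estimated with a standard pinhole footprint approximation (2h·tan(FOV/2)); this uses the oCam parameters quoted in Section 2.7 and is an illustration only, not the authors' calculation (their own scale relationship appears later in Equation (7)).

```python
import math

def expected_pixels(target_m, height_m, fov_deg=65.0, image_px=1920):
    """Approximate how many pixels a target of size target_m spans, assuming a
    pinhole camera whose horizontal FOV maps onto image_px pixels."""
    ground_width_m = 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)
    return target_m * image_px / ground_width_m

# A 0.8 m board seen from 80 m spans roughly 15 pixels, and fewer from 100 m,
# which is consistent with the altitude test shown in Figure 10.
print(round(expected_pixels(0.8, 80)))   # ~15
print(round(expected_pixels(0.8, 100)))  # ~12
```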

Figure 10. The results of altitude tests with the vehicle cruising at (a) 50 m; (b) 80 m and (c) 100 m.

The main loop of the post-identification program was developed in the OpenCV environment.
Similar to the on-board target identification, the post-identification program loads the aerial video file
and runs the algorithm in a loop over each frame. The targets are marked for the GCS operator, who
performs an efficient confirmation. The flight data log and the aerial video are simultaneously
synchronized to determine the reference frame number and reference shutter time. The technical
details of this step are discussed in Section 3.3.2. The target image is saved as a JPEG file and named
with its frame number. Figure 11 shows a red target board and a green agricultural net reported by
the post-identification program. This JPEG file is sent to the GPS transformation program discussed in
Section 3.3.3 to better position the target.

Figure 11. A red target board and a green agricultural net reported by the post-identification program,
with both the red and blue targets marked with circles in corresponding colors.

To determine the image's frame number, we assume that the GoPro HERO 4 camera records the
video with a fixed frame rate of 25 frames per second (FPS) in this study. Thus, the time interval (TI)
between the target frame F in the aerial video and the reference frame can be determined by

TI = (Frame Number − Reference Frame No.) × 40 ms     (4)

and the GPS time of F is

GPS Time = Reference GPS Time + TI     (5)

where the Reference Frame No. and Reference GPS Time are determined during synchronization,
as discussed in Section 3.3.2.

Once the GPS time of the target frame is determined, the altitude and GPS coordinates of the
camera are determined. The yaw angle Ψ is recorded as part of the Attitude messages in the flight
data log, and the corresponding Attitude message can be searched via GPS time. The update
frequencies of the Attitude messages, which come from an inertial-measurement unit (IMU) sensor,
and of the GPS messages are different. These two types of messages cannot be recorded simultaneously
due to the control logic of the flight board. However, the updating frequency of the Attitude message is
much higher than that of the GPS messages, thus the Attitude message that is closest to the GPS time is
treated as the vehicle's current attitude.
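
Equations (4) and (5), together with the nearest-Attitude lookup, reduce to a few lines. The sketch below assumes the flight log has been parsed into a list of (gps_time_ms, yaw) tuples; this data structure and the function names are illustrative rather than part of the published tool.

```python
FRAME_INTERVAL_MS = 40  # 1 / 25 FPS

def frame_gps_time(frame_no, ref_frame_no, ref_gps_time_ms):
    """Equations (4) and (5): map a video frame number to a GPS time."""
    ti_ms = (frame_no - ref_frame_no) * FRAME_INTERVAL_MS
    return ref_gps_time_ms + ti_ms

def nearest_attitude(attitude_log, gps_time_ms):
    """Pick the Attitude record closest in time to the target frame.

    attitude_log is assumed to be a list of (gps_time_ms, yaw_deg) tuples;
    Attitude messages are logged far more often than GPS messages, so the
    closest one is taken as the current attitude.
    """
    return min(attitude_log, key=lambda rec: abs(rec[0] - gps_time_ms))

# Example: frame 2350 with reference frame 100 captured at GPS time 412000 ms
# gives 412000 + 2250 * 40 = 502000 ms.
t = frame_gps_time(2350, 100, 412000)
```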

3.3.2. Synchronization of the Flight Data and Aerial Video

During the flight, the aerial video and flight data are recorded by the GoPro HERO 4 camera and
the flight controller, respectively. It is crucial to synchronize the flight data and the aerial video to
obtain the targets' geo-information for the identification and mapping of the affected areas in a rescue
mission. Camera trigger distance (DO_SET_CAM_TRIGG_DIST), a camera control command provided
by the ArduPlane firmware, was introduced to synchronize the aerial video and the flight data log.
DO_SET_CAM_TRIGG_DIST sets the distance in meters between camera triggers, and the flight
control board logs the camera messages, including GPS time, GPS location and aircraft altitude, when
the camera is triggered. Compared with commercial quad-copters, fixed-wing UAVs fly at higher
airspeeds. The time interval between two consecutive images should be small enough to meet the
overlapping requirement for further mapping. However, the normal GoPro HERO 4 cannot achieve
continuous photo capturing at a high frequency (5 Hz or 10 Hz) for longer than 30 s [27]. Thus, the
GoPro was set to work in video recording mode with a frame rate of 25 FPS. The mode and shutter
buttons were modified with a pulse width modulation (PWM)-controlled relay switch, as shown in
Figure 12, so that the camera can be controlled by the flight controller. The shutter and its duration are
configured in the flight controller.

Figure 12. Modification of the GoPro buttons to PWM-controlled relay switch.

The camera trigger distance can be set to any distance that will not affect the GoPro's video
recording. A high-frequency photo capturing command will lead to video file damage. In this study,
the flight controller sends a PWM signal to trigger the camera and records the shutter times and
positions in the camera messages. However, the Pixhawk records the time at which the control signal is
sent out, and there is a delay between the image's recorded time and its real shutter time. This shutter
delay was measured to be 40 ms and was introduced into the synchronization process.

The synchronization process shown in Figure 13 is conducted after the flight. The comparison
process started with reading the aerial video and the photograph saved in the GoPro's SD card.
The original captured photo was resized to 1920 × 1080 pixels because the GoPro photograph was of a
nonstandard size of 2016 × 1128 pixels. During the comparison process, both the video frames and the
photograph were treated as matrices with a size of 1920 × 1080 × 3, where the number 3 denotes the
three layers of the RGB color space. The difference ε between the video frame and the photo was
determined by the mean-square deviation value of (Matrix_photo − Matrix_frame). The video frame
with the minimum value of ε was considered the same as the original aerial photo (Figure 14), and the
number of this video frame was recorded as the Reference Frame No. (RFN). The recorded GPS time of
sending the aerial photo triggering command was named the Reference GPS Time (RGT). Considering
the above-mentioned 40 ms delay between sending out the command and capturing the photo, the
frame at RFN was taken at the time of (RGT + 40 ms delay time). Therefore, the video is combined with
the flight log.
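
The frame/photo comparison of the synchronization process can be written as a small search over the video. The sketch below is not the authors' implementation; the file paths are placeholders, and it simply returns the frame index minimizing the mean-square deviation ε described above.

```python
import cv2
import numpy as np

def find_reference_frame(video_path, photo_path):
    """Return the index of the video frame most similar to the trigger photo.

    The GoPro photo (2016 x 1128) is resized to the video resolution and the
    mean-square deviation between the photo and every frame is computed; the
    frame with the minimum deviation is taken as the Reference Frame No.
    """
    photo = cv2.imread(photo_path)
    photo = cv2.resize(photo, (1920, 1080)).astype(np.float32)

    cap = cv2.VideoCapture(video_path)
    best_idx, best_eps, idx = -1, float("inf"), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        eps = float(np.mean((photo - frame.astype(np.float32)) ** 2))
        if eps < best_eps:
            best_idx, best_eps = idx, eps
        idx += 1
    cap.release()
    return best_idx
```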

Figure 13. Flowchart for the synchronization of the aerial video and the flight data log.

Figure 14. Comparison results of the synchronization process: (a) the original photo taken by the GoPro
camera and (b) the video frame captured by the synchronization program.
3.3.3. GPS
3.3.3. GPSTransformation
Transformation to to
Locate
Locate Targets
Targets
3.3.3. GPS Transformation to Locate Targets
Once
Oncea target with
a target withitsitscurrent
current aircraft
aircraftposition
position is is
reported
reportedtotothe theGCS,
GCS,ananin-house
in-houseMatLabMatLab
locatingOnce
locating a target
program
program with
is used its
is used current
to toreport
reportaircraft
thethe position
target’s GPS
target’s is
GPS reported
coordinates. to
coordinates. the
InInGCS,
this an in-house
thisstudy,
study,the MatLaboflocating
theposition
position ofthe
the
program
aircraft is assumed to be at the center of the image, because the GPS module is placed above theis
aircraft is is used
assumed to
to report
be at the target’s
center ofGPSthe coordinates.
image, because In this
the study,
GPS the
module position
is placedof the aircraft
above the
assumed
camera. to be at the center of the image, because the GPS module is placed above the camera.
camera.
The
TheThe coverage
coverage ofofan
coverage ofan
an image
image
image cancanbebe
can estimated
estimated
be estimated using
using
using the
thethe camera’s
camera’s
camera’s field
field ofof
field ofview
viewview (FOV)
(FOV)
(FOV) [28],[28],
[28],asasasshown
shownshown
ininFigure
inFigure
Figure 15.
15. The
The
15. Thedistances
distances
distances inin
the
in the xx and
x and
the yy directions
y directions
and areare
directions estimated
estimated
are estimated using
using
using Equation
Equation
Equation (6).(6).
(6).

2 2h
2
a=== (6)
FOV
cos ( FOVX
FOV ) (6)(6)
cos(
cos(2 2 ) )
2
2h
b = 2 2FOV
= = cos( Y
FOV
FOV2 )
cos(2 ) )
cos(
The resolution of the video frame is set to be 19202 × 1080 pixels. The scale between the distance
andThe
pixels is assumed tovideo
be a linear relationship, and×is1080
bebe1920 presented inThe
Equation (7) as: the distance
resolution
The resolutionof the
of the videoframe is set
frame to to
is set 1920 × 1080pixels. scale
pixels. The between
scale between the distance
and pixels
and is assumed
pixels is assumed to to
be be
a linear relationship,
a linear relationship, and is is
and presented
presentedinin
Equation
Equation(7)(7)
as:as:
a 2h
scalex = =   (7)
1920 1920 FOV
22 2X
== ==
1920
1920 1920 FOV
FOV (7)(7)
1920 2
b 2h 2
scaley = =  
1080 1080 FOVY 2
Sensors 2016, 16, 1778 14 of 24

2
= =
1080 FOV
Sensors 2016, 16, 1778 1080 14 of 24
2

(a) (b) (c)

Figure 15. Camera and world coordinates.


Figure 15. Camera and world coordinates.

As Figure 16 shows, a target is assumed to be located on the ( , ) pixel in the photo, and the
Asof
offset Figure 16 shows,
the target a target
from the centerisofassumed is located on the ( x, y) pixel in the photo, and the
to be
the picture
offset of the target from the center of the picture is
"= ∙ #
(m) (8)
scalex · x∙
offsettarget = (m) (8)
scaley · y
For the transformation of a north-east (NE) world-to-camera frame with the angle of the , the
rotation
For matrix is defined
the transformation as
of a north-east (NE) world-to-camera frame with the angle of the Ψ, the rotation
matrix is defined as
cos(
(Ψ ) ) −−sin(
sin (Ψ ))
" #
C =cos (9)
RW = sin( (9)
sin (Ψ ) ) cos
cos((Ψ ))
where
whereΨ is the yawyaw
is the angel of the
angel aircraft.
of the Thus,
aircraft. the position
Thus, offsetoffset
the position in theinworld frameframe
the world can becan
solved with
be solved
with " #
CT PE
P = RW offsettarget = (10)
= = PN (10)

Therefore, the target's GPS coordinates can be determined using

GPS_target = GPS_cam + [P_E / f_x; P_N / f_y]    (11)

where f_x and f_y denote the distances represented by one degree of longitude and latitude, respectively.
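For illustration, the chain of Equations (6)–(11) can be collected into a single helper function. The MatLab sketch below is a minimal illustration of that chain rather than the released locating program; the function name, the [longitude, latitude] ordering of the GPS vectors and the metre-per-degree constants used for f_x and f_y are our own assumptions, and all angles are taken in radians.

```matlab
function gps_target = pixel_to_gps(px, py, h, yaw, gps_cam, fov_x, fov_y)
% Illustrative pixel-to-GPS conversion following Equations (6)-(11).
% px, py  : target pixel offsets measured from the image centre
% h       : flying altitude above ground (m)
% yaw     : aircraft yaw angle Psi (rad)
% gps_cam : [longitude, latitude] of the aircraft (deg)
% fov_x/y : horizontal/vertical field of view of the camera (rad)

% Ground coverage of one frame, Equation (6)
a = 2 * h / cos(fov_x / 2);
b = 2 * h / cos(fov_y / 2);

% Metres per pixel for a 1920 x 1080 frame, Equation (7)
scale_x = a / 1920;
scale_y = b / 1080;

% Offset of the target from the image centre in metres, Equation (8)
offset_target = [scale_x * px; scale_y * py];

% North-east position offset in the world frame, Equations (9)-(10)
R = [cos(yaw) -sin(yaw); sin(yaw) cos(yaw)];
P = R' * offset_target;                    % P = [P_E; P_N]

% Distances represented by one degree of longitude/latitude (assumed values)
f_y = 111320;                              % metres per degree of latitude
f_x = 111320 * cosd(gps_cam(2));           % metres per degree of longitude

% Target GPS coordinates, Equation (11)
gps_target = gps_cam + [P(1) / f_x, P(2) / f_y];
end
```

In practice, the metre-per-degree factors would come from the autopilot's own geodetic routines; the constants above only keep the sketch self-contained.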
A graphical user interface was designed and implemented in the MatLab environment to transform the coordinates with a simple 'click and run' function (Figure 17). The first step is opening the image containing the targets. The program automatically loads the necessary information for the image, including the frame number (also the image's file name), current location, camera attitude and yaw angle of the plane. The second step is to click the 'GET XY' button and use the mouse to click the target in the image. The program shows the coordinates of the target in this image. Finally, clicking the 'GET GPS' button provides the GPS coordinates reported by the program.
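Outside the GUI, the same 'GET XY'/'GET GPS' steps can be reproduced in a few lines. The sketch below reuses the hypothetical pixel_to_gps helper from the previous sketch; the metadata values and file name are placeholders for the quantities that the GUI reads from the flight data log, and the sign convention chosen for the vertical pixel axis is an assumption about how the camera is mounted.

```matlab
% Placeholder per-frame metadata; the GUI loads these from the flight data log.
h       = 80;                    % flying altitude (m)
yaw     = deg2rad(35);           % aircraft yaw angle (rad)
gps_cam = [120.2131, 23.1145];   % [longitude, latitude] of the aircraft (deg)
fov_x   = deg2rad(120);          % assumed horizontal FOV (rad)
fov_y   = deg2rad(90);           % assumed vertical FOV (rad)

img = imread('frame_001234.jpg');          % hypothetical saved frame
imshow(img);
[u, v] = ginput(1);                        % 'GET XY': click the target

px = u - size(img, 2) / 2;                 % offset from the image centre (pixels)
py = size(img, 1) / 2 - v;                 % image rows grow downwards, so flip the sign

gps_target = pixel_to_gps(px, py, h, yaw, gps_cam, fov_x, fov_y)  % 'GET GPS'
```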
Figure 16. Coordinates of the camera and world frames.

Figure 17. Graphical user interface for the GPS transformation that allows end users to access a target's GPS coordinates using simple buttons.
3.4. Mapping the Searched Area

During rescue missions following landslides or floods, the terrain features can change significantly. After target identification, the local map must be re-built to guarantee the rescue team's safety and shorten the rescue time. In this study, we provide a preliminary demonstration of a fixed-wing UAV used to assist in post-disaster surveillance. Mapping algorithms are not discussed in this paper. The commercial software Pix4D was used to generate orthomosaic models and point clouds.

To map the disaster area, a set of aerial photos and their geo-information are applied to the commercial software, Pix4D. There should be at least 65% overlap between consecutive pictures, but aiming for 80% or higher is recommended. The distance between two flight paths should be smaller than a, the frame coverage estimated with Equation (6) in Section 3.3.3 (see the sketch below). A mapping image capture program is shown in Figure 18.
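As a simple planning check, the frame footprint from Equation (6) can be compared against the chosen flight-path spacing before the mission. In the MatLab sketch below, the FOV values are placeholders rather than the specifications of the onboard camera, while the 80-m altitude and 80-m path spacing match the flight tests described later.

```matlab
% Rough coverage check for the mapping flight plan. The FOV values are
% illustrative only; altitude and path spacing follow the flight tests.
h     = 80;                      % flying altitude (m)
fov_x = deg2rad(120);            % assumed horizontal FOV of the camera (rad)
fov_y = deg2rad(90);             % assumed vertical FOV of the camera (rad)

% Ground coverage of a single frame, Equation (6)
a = 2 * h / cos(fov_x / 2);
b = 2 * h / cos(fov_y / 2);      % footprint in the other image direction

path_spacing = 80;               % distance between adjacent flight paths (m)
assert(path_spacing < a, 'Flight paths are too far apart for full coverage');
```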
Figure 18. Flowchart of the mapping image capture program.

The mapping image capture program starts with GPS messages from the flight data log, with reference frame numbers and shutter times generated by the synchronization step discussed in Section 3.2. The program loads the GPS times of all of the GPS messages in the loop and calculates the corresponding frame number N in the aerial video, which equals

N = (GPS Time − Reference GPS Time) / 40 ms + Reference Frame No.

Then, the mapping image capture program loads the Nth frame of the aerial video and saves it to the image file.
Once the mapping image capture program is complete, a series of photos and a text file containing the file names, longitude, latitude, altitude, roll, pitch and yaw are generated. Pix4D then produces the orthomosaic model and point clouds using these two file types.
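A minimal sketch of this capture loop is given below. It assumes a 25 fps aerial video (hence the 40-ms frame interval) and synchronization outputs named gps_times, ref_gps_time and ref_frame_no, together with the logged attitude arrays; the geotag line simply mirrors the file-name/longitude/latitude/altitude/roll/pitch/yaw list mentioned above, and every identifier and file name is illustrative rather than taken from the released code.

```matlab
% Illustrative mapping image capture loop (not the released implementation).
% gps_times, ref_gps_time, ref_frame_no and the attitude arrays are assumed to
% come from the synchronization step of Section 3.2 and the flight data log.
video = VideoReader('aerial_video.mp4');   % hypothetical video file
fid   = fopen('geotags.txt', 'w');         % photo list with GPS/attitude for Pix4D

for k = 1:numel(gps_times)
    % Frame number corresponding to the k-th GPS message (40 ms per frame)
    N = round((gps_times(k) - ref_gps_time) / 0.040) + ref_frame_no;

    % Save the Nth frame of the aerial video as a still image
    frame = read(video, N);
    name  = sprintf('frame_%06d.jpg', N);
    imwrite(frame, name);

    % One geotag line: name, longitude, latitude, altitude, roll, pitch, yaw
    fprintf(fid, '%s %.6f %.6f %.1f %.2f %.2f %.2f\n', name, ...
        lon(k), lat(k), alt(k), roll(k), pitch(k), yaw(k));
end
fclose(fid);
```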
4. Blind Tests and Results
To test the all-in-one camera-based target detection and positioning system, a blind field test was designed. A drone, a 2 m × 2 m blue or red square board and a 0.8 m × 0.8 m blue or red square board were used to simulate a crashed airplane, broken cars and injured people, respectively (Figure 19a–c).

The flight tests were conducted at two test sites: the International Model Aviation Center (22°24′58.1″ N, 114°02′35.4″ E) of the Hong Kong Model Engineering Club, Ltd. in Yuen Long town, Hong Kong, and the Zengwun River (23°7′18.03″ N, 120°13′53.86″ E) in the Xigang District of Tainan city, Taiwan. Given the limited flying area in Hong Kong, the preliminary in-sight tests were conducted in Hong Kong and the main blind out-of-sight tests were conducted in Taiwan. The flight test information is listed in Table 1. Only post-identification tests were conducted in Hong Kong. In Taiwan, no after-flight mapping was done for the first two tests (Tests 3 and 4).

Figure 19. (a) The drone simulated a crashed airplane; (b) the 2 m × 2 m blue or red target boards represented broken cars and (c) the 0.8 m × 0.8 m blue or red target boards represented injured people to be rescued.

Table 1. Basic information for flight tests.

Flight Test | Test Site | Flight Time (min) | Real-Time Identification | Post-Identification | Mapping
Test 1 | Hong Kong | 15:36 | × | √ | ×
Test 2 | Hong Kong | 3:05 | × | √ | ×
Test 3 | Taiwan | 13:23 | √ | √ | ×
Test 4 | Taiwan | 17:41 | √ | √ | ×
Test 5 | Taiwan | 17:26 | √ | √ | √
Test 6 | Taiwan | 16:08 | √ | √ | √
Test 7 | Taiwan | 16:23 | √ | √ | √
Test 8 | Taiwan | 17:56 | √ | √ | √
Figure 20a,b shows the search site and its schematic in Hong Kong. The search path repeated the square route due to the limited flight area. The yellow path in Figure 20a is the designed mission path and the purple line indicates the real flight path of the vehicle. For the tests in Taiwan, there were two main search areas (A and B) along the bank of the Zengwun River in the Xigang District of Tainan city, Taiwan, as shown in Figure 20c. The schematics of the designed search route and areas are depicted in Figure 20d. The maximum communication distance was 3 km and the width of the flight corridor was 30 m. This width was intended to test the stability of the UAV and the geo-fencing function of the flight controller. If the UAV flies outside the corridor, it is considered to have crashed. After the flight performance tests, the UAV flew inside the corridor and was proven stable. An unknown number of targets were placed in search areas A and B by an independent volunteer before every test. The search team then conducted the field tests and tried to find the targets. The test results are discussed in the following sections.
Figure 20. (a) Test route in Hong Kong; (b) schematics of the designed route in Hong Kong; (c) search areas A and B for blind tests in Taiwan and (d) schematics of the designed search route and areas in Taiwan.

4.1. Target Identification and Location

Post-target identification processing was conducted in all eight flight tests to assess the identification algorithm. The post-identification program ran on a laptop equipped with an Intel Core i5-2430M CPU and 8 GB of RAM. The testing results are shown in Table 2. Note that the post-identification program missed only two targets across all of the tests.

Table 2. Post-target identification results.

Flight Test | Resolution | Flying Altitude | Flight Time (min) | Targets | Identified Targets | Total Post-Target Identification Time (min)
Test 1 | 1920 × 1080 | 80 | 15:36 | 3 | 2 | 11:08.6
Test 2 | 1920 × 1080 | 80 | 3:05 | 2 | 2 | 02:46.3
Test 3 | 1920 × 1080 | 80 | 13:23 | 3 | 3 | 12:57.1
Test 4 | 1920 × 1080 | 80 | 17:41 | 3 | 2 | 14:04.6
Test 5 | 1920 × 1080 | 80 | 17:26 | 3 | 3 | 13:23.9
Test 6 | 1920 × 1080 | 80 | 16:08 | 3 | 3 | 11:45.1
Test 7 | 1920 × 1080 | 80 | 16:23 | 6 | 6 | 12:16.9
Test 8 | 1920 × 1080 | 75 | 17:56 | 6 | 6 | 14:18.3

Taking test 7 as an example, 6/6 targets were found by the identification system, as shown in Figure 21, including a crashed aircraft, two crashed cars and three injured people. Note that in Figure 21g the target board representing an injured person was folded by gusts of wind to the extent that it is barely recognizable. Nevertheless, the identification system still reported this target, confirming its reliability. The locating error of five targets was less than 15 m, as shown in Table 3 (meeting the requirements discussed in Section 1). The targets and their locations were reported within 15 min.

Table 3. Locating results of flight test 7.

Target | Red Z | Red Plane | Blue I | Blue V | Blue J | Red Q
Latitude (N) | 23.114536° | 23.111577° | 23.110889° | 23.113637° | 23.122189° | 23.117840°
Longitude (E) | 120.213111° | 120.211898° | 120.210819° | 120.210463° | 120.223206° | 120.225225°
Error | 2.8 m | 13.9 m | 1.6 m | 0.8 m | 11.3 m | 4.8 m

Figure 21. (a) The locations of six simulated targets; (b) the original image saved by the identification program with the target drone; (c) designed target (blue board with letter V) represents an injured person; (d) designed target (blue board with letter J) represents an injured person; (e) designed target (red board with letter Q) represents a crashed car; (f) designed target (red board with letter Z) represents a crashed car and (g) designed target (small blue board) represents an injured person. The board was blown over by the wind.

In addition to the designed targets, the identification program reported real cars/trucks, people, boats and other objects. The percentages of each type of target are shown in Figure 22. The large number of other targets is due to the nature of the search area. The testing site is a large area of cropland near a river, and the local farmers use a type of fertilizer that is stored in blue buckets and they use green nets to fence in their crops. These two item types were reported, as shown in Figure 23. However, these results can be quickly sifted through by the GCS operator. The identification program still reduces the operator's workload, and the search mission was successfully completed in 40 min, beginning when the UAV took off and ending when all of the targets had been reported.

Figure 22. (a) Composition of reported targets (other 67%, cars/trucks 15%, simulated targets 10%, ships/boats 6%, people 2%); (b) a person on the road; (c) a red car and (d) a red boat reported by the identification program.

Figure 23. The other reported targets: (a) a blue bucket and (b) green nets.

In tests 3–8, both on-board real-time processing and post-processing were conducted, and the results are shown in Figure 24. Note that the performance of the post-target identification is better than that of real-time onboard target identification, due to the higher resolution of the image source. Nevertheless, the on-board target identification system still reported more than 60% of the targets and provided an efficient real-time supplementary tool for the all-in-one rescue mission. A future study will be conducted to improve the success rates of on-board target identification systems.

Figure 24. Target identification results of real-time processing and post-processing.

4.2. Mapping

To cover the whole search area, the flight plan was designed as shown in Figure 25. The distance between two adjacent flight paths is 80 m. The total distance of the flight plan is 20.5 km, with a flight time of 18 min. The turning radius of the UAV was calculated to be 50 m for bank angles no larger than 35°. Thus, as shown in Figure 25b, the flight plan was designed with a 160-m turning diameter while the gap between the two flight paths remained 80 m to ensure overlapping and complete coverage. After the flight, the mapping image capture program developed in this study was applied to capture the images from the high-resolution video and process the flight data log. A total of 2200 photos were generated and applied to Pix4D, and the resulting orthomosaic model and point clouds are shown in Figure 26. The missing part is due to the strong reflection on the water's surface, which resulted in mismatched features.

Figure 25. (a) Overall flight plan for the search mission; (b) flight plan for search area B (the turning diameter reaches 160 m to ensure the flight performance while the distance between the two flight paths remains 80 m, guaranteeing full coverage and overlap) and (c) flight plan for search area A.

Figure 26. (a) Orthomosaic model of the testing area and (b) point clouds of the search area.

5. Conclusions

In this study, a UAV system was developed, and its ability to assist in SAR missions after disasters was demonstrated. The UAV system is a data acquisition system equipped with various sensors to realize searching and geo-information acquisition in a single flight. The system can reduce the cost of large-scale searches, improve the efficiency and reduce end-users' workloads.

In this paper, we presented a target identification algorithm with a self-adapting threshold that can be applied to a UAV system. Based on this algorithm, a set of programs was developed and tested in a simulated search mission. The test results demonstrated the reliability and efficiency of this new UAV system.

A further study will be conducted to improve the image processing in both onboard and post-flight target identification, focusing on reducing the number of unexpected reported targets. A proposed optimization method is to add an extra filtration process to the GCS to further identify the shape of the targets. This proposed method will not increase the computational time of the onboard device significantly. It is a simple but effective method considering the limited CPU capability of an on-board processor. Generally speaking, most commercial software is too comprehensive to be used on the on-board device. Notably, the limitation of the computing power becomes a minor consideration during post-processing, since powerful computing devices can be used at this stage. To evaluate and improve the performance of the target identification algorithm in post-processing, a further study will be conducted, including the application of parallel computing technology and comparison with advanced commercial software.

In this study, the scales of the camera and world coordinates were assumed to be linear. This assumption can result in target location errors. We tried to reduce the error by selecting the image with the target near the image center. Although the error of the current system is acceptable for a search mission, we will conduct a further study to improve the location accuracy. Lidar will be installed to replace the sonar, and more accurate relative vehicle height will be provided for auto-landing. Also, in the future, the vehicle will be further integrated to realize the 'Ready-to-Fly' stage for quick responses in real applications.

Supplementary Materials: The following is available online at https://www.youtube.com/watch?v=19_-RyPp93M. Video S1: A Camera-Based Target Detection and Positioning System for Wilderness Search and Rescue using a UAV. https://github.com/jingego/UAS_system/tree/master/Image%20Processing. Source Code 1: MatLab Code of targets identification. https://github.com/jingego/UAS_system/blob/master/Mapping_pre-process/CAM_clock_paper_version.m. Source Code 2: MatLab Code of synchronization.
Acknowledgments: This work is sponsored by Innovation and Technology Commission, Hong Kong under
Contract No. ITS/334/15FP. Special thanks to Jieming Li for his help in building the image identification
algorithm of this work.
Author Contributions: Jingxuan Sun and Boyang Li designed the overall system. In addition, Boyang Li
developed the vehicle platform and Jingxuan Sun developed the identification algorithms, locating algorithms
and post image processing system. Yifan Jiang developed the on-board targets identification. Jingxuan Sun and
Boyang Li designed and performed the experiments. Jingxuan Sun analyzed the experiment results and wrote the
paper. Chih-yung Wen is in charge of the whole project management.
Conflicts of Interest: The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:
UAV: Unmanned Aerial Vehicle
SAR: Search and Rescue
GCS: Ground Control System
AAT: Auto Antenna Tracker
OSD: On Screen Display
FOV: Field of view
FPS: Frame Per Second

References
1. Indonesia Airasia Flight 8501. Available online: https://en.wikipedia.org/wiki/Indonesia_Air-Asia_Flight_
8501 (accessed on 20 October 2016).
2. Qz8501: Body of First Victim Identified. Available online: http://english.astroawani.com/airasia-qz8501-
news/qz8501-body-first-victim-identified-51357 (accessed on 20 October 2016).
3. Airasia Crash Caused by Faulty Rudder System, Pilot Response, Indonesia Says. Available online:
https://www.thestar.com/news/world/2015/12/01/airasia-crash-caused-by-faulty-rudder-system-
pilot-response-indonesia-says.html (accessed on 20 October 2016).
4. Goodrich, M.A.; Morse, B.S.; Gerhardt, D.; Cooper, J.L.; Quigley, M.; Adams, J.A.; Humphrey, C. Supporting
wilderness search and rescue using a camera-equipped mini uav. J. Field Robot. 2008, 25, 89–110. [CrossRef]
5. Goodrich, M.A.; Cooper, J.L.; Adams, J.A.; Humphrey, C.; Zeeman, R.; Buss, B.G. Using a mini-uav to
support wilderness search and rescue: Practices for human-robot teaming. In Proceedings of the 2007 IEEE
International Workshop on Safety, Security and Rescue Robotics, Rome, Italy, 27–29 September 2007.
6. Goodrich, M.A.; Morse, B.S.; Engh, C.; Cooper, J.L.; Adams, J.A. Towards using unmanned aerial vehicles
(UAVs) in wilderness search and rescue: Lessons from field trials. Interact. Stud. 2009, 10, 453–478.
7. Morse, B.S.; Engh, C.H.; Goodrich, M.A. Uav video coverage quality maps and prioritized indexing for
wilderness search and rescue. In Proceedings of the 5th ACM/IEEE international conference on Human-robot
interaction, Osaka, Japan, 2–5 March 2010.
8. Doherty, P.; Rudol, P. A uav search and rescue scenario with human body detection and geolocalization.
In Proceedings of the Australasian Joint Conference on Artificial Intelligence, Gold Coast, Australia,
2–6 December 2007.
9. Habib, M.K.; Baudoin, Y. Robot-assisted risky intervention, search, rescue and environmental surveillance.
Int. J. Adv. Robot. Syst. 2010, 7, 1–8.
10. Tomic, T.; Schmid, K.; Lutz, P.; Domel, A.; Kassecker, M.; Mair, E.; Grixa, I.L.; Ruess, F.; Suppa, M.; Burschka, D.
Toward a fully autonomous uav: Research platform for indoor and outdoor urban search and rescue.
IEEE Robot. Autom. Mag. 2012, 19, 46–56. [CrossRef]
11. Waharte, S.; Trigoni, N. Supporting search and rescue operations with uavs. In Proceedings of the IEEE 2010
International Conference on Emerging Security Technologies (EST), Canterbury, UK, 6–7 September 2010.

12. Naidoo, Y.; Stopforth, R.; Bright, G. Development of an uav for search & rescue applications. In Proceedings
of the IEEE AFRICON 2011, Livingstone, Zambia, 13–15 September 2011.
13. Bernard, M.; Kondak, K.; Maza, I.; Ollero, A. Autonomous transportation and deployment with aerial robots
for search and rescue missions. J. Field Robot. 2011, 28, 914–931. [CrossRef]
14. Cummings, M. Designing Decision Support Systems for Revolutionary Command and Control Domains.
Ph. D. Thesis, University of Virginia, Charlottesville, VA, USA, 2004.
15. Olsen, D.R., Jr.; Wood, S.B. Fan-out: Measuring human control of multiple robots. In Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems, Vienna, Austria, 24–29 April 2004.
16. Davis, J.W.; Keck, M.A. A two-stage template approach to person detection in thermal imagery.
WACV/MOTION 2005, 5, 364–369.
17. Lee, D.J.; Zhan, P.; Thomas, A.; Schoenberger, R.B. Shape-based human detection for threat assessment.
In Proceedings of the SPIE 5438, Visual Information Processing XIII, Orlando, FL, USA, 15 July 2004.
18. Mikolajczyk, K.; Schmid, C.; Zisserman, A. Human detection based on a probabilistic assembly of robust
part detectors. In European Conference on Computer Vision, Proceedings of the 8th European Conference
on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Springer: Berlin/Heidelberg, Germany;
pp. 69–82.
19. Rudol, P.; Doherty, P. Human body detection and geolocalization for uav search and rescue missions using
color and thermal imagery. In Proceedings of the 2008 IEEE Aerospace Conference, Montana, MT, USA,
1–8 March 2008.
20. Wu, J.; Zhou, G. Real-time uav video processing for quick-response to natural disaster. In Proceedings of
the 2006 IEEE International Conference on Geoscience and Remote Sensing Symposium, Denver, CO, USA,
31 July–4 August 2006.
21. Suzuki, T.; Meguro, J.; Amano, Y.; Hashizume, T.; Hirokawa, R.; Tatsumi, K.; Sato, K.; Takiguchi, J.-I.
Information collecting system based on aerial images obtained by a small uav for disaster prevention.
In Proceedings of the 2007 International Workshop and Conference on Photonics and Nanotechnology,
Pattaya, Thailand, 16–18 December 2007.
22. Xi, C.; Guo, S. Image target identification of uav based on sift. Proced. Eng. 2011, 15, 3205–3209.
23. Li, C.; Zhang, G.; Lei, T.; Gong, A. Quick image-processing method of uav without control points data in
earthquake disaster area. Trans. Nonferrous Metals Soc. China 2011, 21, s523–s528. [CrossRef]
24. United Eagle Talon Day Fatso FPV Carrier. Available online: http://www.x-uav.cn/en/content/?463.html
(accessed on 20 October 2016).
25. Hardkernel Co., Ltd. Ocam: 5mp USB 3.0 Camera. Available online: http://www.hardkernel.com/main/
pro-ducts/prdt_info.php?g_code=G145231889365 (accessed on 20 October 2016).
26. Chen, Y.; Hsiao, F.; Shen, J.; Hung, F.; Lin, S. Application of matlab to the vision-based navigation of UAVs.
In Proceedings of the 2010 8th IEEE International Conference on Control and Automation (ICCA), Xiamen,
China, 9–11 June 2010.
27. Gopro hero4 Silver. Available online: http://shop.gopro.com/APAC/cameras/hero4-silver/CHDHY-401-
EU.html (accessed on 20 October 2016).
28. Hero3+ Black Edition Field of View (FOV) Information. Available online: https://gopro.com/support/
articles-/hero3-field-of-view-fov-information (accessed on 20 October 2016).

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
