
remote sensing

Article
Automatic Assessment of Green Space Ratio in Urban
Areas from Mobile Scanning Data
Junichi Susaki 1, * and Seiya Kubota 2
1 Graduate School of Engineering, Kyoto University, C1-1-206, Kyotodaigaku-Katsura, Nishikyo-ku,
Kyoto 615-8540, Japan
2 Graduate School of Engineering, Kyoto University, C1-1-209, Kyotodaigaku-Katsura, Nishikyo-ku,
Kyoto 615-8540, Japan; kubota.seiya.57x@st.kyoto-u.ac.jp
* Correspondence: susaki.junichi.3r@kyoto-u.ac.jp; Tel.: +81-75-383-3300

Academic Editors: James Campbell and Prasad S. Thenkabail


Received: 12 December 2016; Accepted: 23 February 2017; Published: 27 February 2017

Abstract: In this paper, we propose a method for using mobile laser-scanning data to estimate
the green space ratio (GSR), a landscape index that represents the proportion of green area to the
whole-view area. The proposed method first classifies and segments vegetation using voxel-based
and shape-based approaches. Vertical planar-surface objects are excluded, and randomly distributed
objects are extracted as vegetation via multi-spatial-scale analysis. Then, the method generates a map
representing occlusion by vegetation, and estimates GSR at an arbitrary location. We applied the
method to a data set collected in a residential area in Kyoto, Japan. We compared the results with
the ground truth data and obtained a root mean squared error of approximately 4.1%. Although
some non-vegetation with rough surfaces was falsely extracted as vegetation, our method seems to
estimate GSR to an acceptable accuracy.

Keywords: green space ratio; mobile scanning data; urban landscape; multi-scale analysis

1. Introduction
In urban planning, the provision of green space (GS) in urban areas is one of the most challenging
issues, because urban green space (UGS) provides valuable direct and indirect services to the
surrounding areas [1]. For example, Nishinomiya City in Hyogo Prefecture, Japan is promoting
improvement and maintenance of its landscapes. It enacted a regulation for one residential area to
the effect that newly constructed houses should have a green space ratio (proportion of vegetation
to visible area: GSR) of at least 15%. The regulation specifies a simple formula for calculating the
green space ratio [2]. Local governments assess landscape quality from the perspective of aesthetics
before approving new construction. The landscape index is calculated through ground surveys by
local government officials. However, such surveys are time-consuming and expensive when applied
over wide areas, making it difficult to apply the regulation to the entire city and consequently promote
more UGS.
Automatic measures of UGS come mainly from geographic information system (GIS) data
and remotely sensed data [3]. For example, Tian et al. [4] used high-quality digital maps with a
spatial resolution of 0.5 m × 0.5 m to analyze the landscape pattern of UGS for ecological quality.
Gupta et al. [5] calculated an urban neighborhood green index to quantify homogeneous greenness
from multi-temporal satellite images. The periodically obtained remotely sensed imagery is suitable
for updating the spatial distribution patterns of GS, but it is limited in that the three-dimensional (3D)
distribution is not directly obtained.
As a tool for directly measuring 3D coordinate values, light detection and ranging (LiDAR)
measures laser light reflected from the surfaces of objects. The discrete LiDAR data are used to model
the 3D surfaces of objects and derive their attributes. Airborne and terrestrial LiDAR are now in
operational use, and terrestrial LiDAR can be either stationary or vehicle-mounted (mobile).
Applications of LiDAR data to
landscape analysis require the extraction of vegetation via classification and segmentation of point
clouds. In the case of airborne LiDAR, vegetation returns the light in various ways—from the surface,
middle and bottom—whereas buildings return the light mainly from the surface. This unique feature
of multiple returns is a challenge to applying LiDAR data to vegetation extraction. It is well known
that the first and last pulses of the light reflected from vegetation correspond to the top (canopies) and
bottom (ground) of the vegetation, respectively, which allows the heights of the vegetation surface
and ground to be estimated. Thus, we can derive the heights of vegetation by subtracting the ground
height from the vegetation surface height. Over the last decade, full-waveform airborne LiDAR has
been examined [6,7], which can provide more detailed patterns of reflected light and has the potential
to estimate the structure of forests.
In a complex urban area, the automatic extraction of vegetation requires the classification of
man-made and natural objects. This is another challenge to applying LiDAR data to urban vegetation.
One of the most promising approaches is to process the LiDAR data on multiple scales [8–10].
For example, Brodu and Lague [11] presented a method to monitor the local cloud geometry behavior
across several scales by changing the diameter of the sphere for representing local features. Wakita and
Susaki [12] proposed a multi-scale and shape-based method to extract vegetation in complex urban
areas from terrestrial LiDAR data.
Landscape indices related to GS can be calculated from the vegetation extracted from point clouds
or other sources. Susaki and Komiya [13] proposed a method to estimate the green space ratio (GSR)
in urban areas from airborne LiDAR and aerial images intended for the quantitative assessment of
local landscapes. The GSR is defined as the ratio of the area occluded by vegetation to the entire
visible area at a height of a person on the ground. Following Wakita and Susaki [12], Wakita et al. [14]
developed a method to estimate GSR from terrestrial LiDAR data. Huang et al. [15] extracted urban
vegetation using point clouds and remote sensing images. Individual tree crowns were extracted using
the normalized digital surface model from airborne LiDAR data and the normalized difference vegetation
index (NDVI) from near-infrared images. Yang et al. [16] estimated a Green View Index using field survey
data and photographs. Yu et al. [17] presented the Floor Green View Index, an indicator defined as the
area of visible vegetation on a particular floor of a building; this index is calculated from airborne
LiDAR data and NDVI derived from aerial near-infrared photographs. Because of occlusion, more
accurate extraction of vegetation can be achieved using terrestrial LiDAR data. In addition to terrestrial
stationary LiDAR, mobile (or vehicle-based) LiDAR has been examined for this purpose because it is
capable of rapidly measuring the data in a large area [18]. Yang and Dong [19] proposed a method to
segment mobile LiDAR data into objects using shape information derived from the data. However,
mobile LiDAR data have a wide range of point density, and thus the estimation of vegetation may
tend to be unstable when there is vegetation in the far distance. Therefore, in this research, we present
a method to extract vegetation in complex urban areas and estimate a GSR from mobile LiDAR data.

2. Data Used and Study Area


We used a mobile LiDAR system, Trimble MX5, which is a vehicle-mounted LiDAR whose
angles were set to 30° for pitch rotation and 0° for heading rotation. The height of LiDAR was
set to approximately 2.3 m. It measures 550,000 points per second at a maximum distance of
800 m. The system also has three cameras, which were set to −15°, 0° and 15° for heading rotation.
The camera resolution was five million pixels. The measurements were carried out on 3 March 2014
in the Higashiyama Ward of Kyoto, Japan. Higashiyama contains plenty of GS around traditional
temples and shrines. Figure 1 shows the study area.
To assess the accuracy of the GSR estimated in this research, we used images taken on 11 April
2015 using a camera with a fisheye lens. Although there was almost a year between the mobile LiDAR
and image data, we compared the equivalent color information in the point clouds and the camera
images and concluded that the effect of the time gap was not significant. The camera was an
EOS Kiss X3 by Canon, and the fisheye lens was a 4.5-mm F2.8 EX DC Circular Fisheye by Sigma.
We selected 18 points as assessment positions, labeled P1 to P18 in Figure 1. We took two images
covering the forward and backward views at each position. We manually colored the vegetation areas
and converted the two images into one panoramic image, from which we calculated the measured GSR.
At each position, we measured the actual GSR and estimated the GSR from the LiDAR data.
The measured GSRs were used as the ground truth for validating the estimated GSR results.

Figure 1. Study area: (a) Kyoto city in Japan, shown in yellow. The black rectangle corresponds to the
area shown in (b). (b) Assessment positions are marked by yellow pins. Image is from Google Earth.

3. Green Space Ratio (GSR)

Assume that the vegetation distribution in 3D coordinates is known in advance. We vary the
azimuth from 0° to 360° and determine the maximum and minimum elevation angles at which the
view is occluded by vegetation. Figure 2a shows the viewable area from the perspective of a person
facing in the direction of the azimuth φ. We assume that φ and the elevation angle θ are uniformly
divided into intervals ∆φ and ∆θ, respectively. We vary the value of φ in steps of ∆φ from 0° to
360° − ∆φ, and search for vegetation points along each ray at angle φ within the maximum range
Dmax. At every vegetation point, we can calculate the values of θ at which occlusion occurs because of
vegetation and thereby determine the maximum elevation angle θmax and the minimum elevation
angle θmin within the maximum range Dmax (Figure 2a). If multiple populations of vegetation exist,
the maximum and minimum elevation angles occluded by each one are examined (Figure 2b).
As a result, we can generate a map similar to the one shown in Figure 3 (referred to hereinafter as an
occlusion map). The GSR in azimuth–elevation angle space is given by

GSR = A2 / (A1 + A2) × 100,  (1)

where A1 and A2 denote the non-vegetation area and the occluded vegetation area, respectively, in
azimuth–elevation angle space. According to Equation (1), the GSR can take any value between 0%
and 100%.
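As a minimal illustration of Equation (1), the following Python sketch (our own, not the authors'
implementation) counts vegetation and non-vegetation bins on a discretized occlusion map; the bin
size, array layout and example values are assumptions made purely for this example.

import numpy as np

def gsr_from_occlusion_map(vegetation_mask: np.ndarray) -> float:
    """vegetation_mask[i, j] is True where the view bin (azimuth i, elevation j)
    is occluded by vegetation (area A2); all remaining bins form A1."""
    a2 = np.count_nonzero(vegetation_mask)   # occluded vegetation area
    a1 = vegetation_mask.size - a2           # non-vegetation area
    return 100.0 * a2 / (a1 + a2)            # Equation (1)

# Hypothetical example: 1-degree bins, azimuth 0..359, elevation -90..89.
mask = np.zeros((360, 180), dtype=bool)
mask[40:80, 95:130] = True                           # one patch of trees in the view
print(f"GSR = {gsr_from_occlusion_map(mask):.1f}%")  # -> 2.2%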

Figure 2. Maximum elevation angle θmax at which occlusion by vegetation is present from the viewpoint
of a human of height h along azimuth φ within the maximum range Dmax. (a) Minimum elevation
angle θmin is estimated by referring to the ground surface height of a point where θmax is observed
in the case where no object exists between the human and vegetation. (b) In the case where multiple
populations of vegetation exist, maximum and minimum elevation angles occluded by each one are
examined. In (b), e1 and e3 denote the eye sights at the maximum elevation angles, whereas e2 and e4
denote the eye sights at the minimum elevation angles.
Figure 3. Occlusion map for calculating green space ratio (GSR) in azimuth–elevation angle space.
GSR is defined as the ratio of the occluded vegetation area to the entire area.

4. Methodology for Estimating GSR from Mobile Scanning Data

Figure 4 shows the method for estimating GSR in this research. First, it classifies the point clouds
measured using mobile LiDAR into vegetation and non-vegetation. Then, assuming the position of
a viewpoint, it generates an occlusion map indicating how much vegetation is available in the view.
Finally, it calculates GSR for the viewpoint.

Figure 4. Flowchart of the proposed method for estimating GSR from mobile scanning data.

4.1. Vegetation Extraction

The extraction of vegetation is based on volumetric pixel (voxel)-based analysis to reduce
computational time. A voxel is a cuboid volumetric element, and assigning point clouds to voxels is
an effective approach to processing a huge number of points [20]. The flowchart of vegetation
extraction is shown in Figure 5. Local and contextual features are used to classify point clouds.
Local features are calculated using a set of points in each voxel. A planar surface is fitted to the points
to calculate a normal vector, and the 3D distribution characteristics are expressed using principal
component analysis (PCA). The contextual features are derived from the horizontality of the normal
vectors and the connectivity of the neighboring voxels. The extraction process is repeated twice with
voxels of different sizes. The sizes are set according to the length of leaves. In the first screening,
the majority of vegetation points are extracted but sparse vegetation points are not classified.
Therefore, the second screening, with a larger voxel size, extracts the remaining vegetation points.
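The voxel resampling step can be pictured with the short sketch below; it simply assigns points to
a regular grid of cuboid voxels. The voxel size, helper names and random test data are illustrative
assumptions, not the authors' code.

import numpy as np
from collections import defaultdict

def voxelize(points: np.ndarray, voxel_size: float):
    """Group an (N, 3) array of x, y, z coordinates into voxels of edge length
    voxel_size (e.g., 0.5 m in the first loop). Returns a dict mapping integer
    voxel indices (ix, iy, iz) -> array of member points."""
    indices = np.floor(points / voxel_size).astype(np.int64)
    voxels = defaultdict(list)
    for idx, pt in zip(map(tuple, indices), points):
        voxels[idx].append(pt)
    return {idx: np.asarray(pts) for idx, pts in voxels.items()}

# Hypothetical usage with random points standing in for one LiDAR tile.
pts = np.random.rand(10000, 3) * 20.0   # a 20 m x 20 m x 20 m block
vox = voxelize(pts, voxel_size=0.5)
print(len(vox), "occupied voxels")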

Figure 5. Flowchart of the method for extracting point clouds of vegetation. G1, G2 and G3 denote
groups of vegetation, ambiguous voxels (vegetation and non-vegetation), and non-vegetation,
respectively. σij denotes the voxel size for the j-th processing in the i-th loop.

4.1.1. Vertical Planar Surface Exclusion

In urban areas, building walls and roofs account for the majority of non-vegetation objects;
their distribution characteristics are planar rather than scattered. After applying PCA to each voxel,
the root mean square error (RMSE) is calculated between the points in the voxel and the estimated
planar surface. If the RMSE is within a designated threshold and the horizontal component of the
normal is within another designated threshold, the voxel is regarded as non-vegetation and is excluded
from the subsequent process.
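A minimal sketch of this screening is given below, assuming a PCA-based plane fit per voxel;
the RMSE threshold is a placeholder, while the 85–95° zenith range for the normal follows the
setting reported in Section 5.

import numpy as np

def is_vertical_planar_surface(points: np.ndarray,
                               rmse_threshold: float = 0.05,
                               zenith_range_deg: tuple = (85.0, 95.0)) -> bool:
    """Fit a plane to the points of one voxel via PCA and test whether the voxel
    looks like a wall: a small point-to-plane RMSE and a plane normal whose
    zenith angle is close to 90 degrees (a near-vertical surface). The RMSE
    threshold here is an illustrative placeholder."""
    if len(points) < 3:
        return False
    centered = points - points.mean(axis=0)
    # Eigenvector of the smallest eigenvalue of the covariance = plane normal.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    normal = eigvecs[:, 0]
    rmse = np.sqrt(np.mean((centered @ normal) ** 2))
    zenith = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    lo, hi = zenith_range_deg
    return rmse <= rmse_threshold and lo <= zenith <= hi

# Hypothetical use: if is_vertical_planar_surface(voxel_points), mark the
# voxel as non-vegetation and drop it from the later steps.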

4.1.2. Voxel Classification by 3D Distribution Characteristics

PCA can capture the distribution features of point clouds contained in voxels and clusters.
A set of 3D points pi with i = 1, ..., N is used to compute three eigenvectors, l1, l2 and l3, and three
eigenvalues, λ1, λ2 and λ3, with λ1 ≥ λ2 ≥ λ3 ≥ 0. Normalized eigenvalues c1, c2 and c3 are calculated
by dividing each eigenvalue by the sum of all eigenvalues, as shown in Equation (2):

ci = λi / (λ1 + λ2 + λ3)  (i = 1, 2, 3).  (2)

If c1 is much larger than the other two, the point cloud has a 1D point distribution. If c3 is much
smaller than the other two, the points have a 2D distribution. If all three have similar values, the point
cloud has a scattered (3D) distribution.

Voxels are divided into three groups according to their distribution characteristics computed with
points in each voxel. Vegetation tends to have 3D distribution characteristics, hence we used the slope
(ratio) as described in Equation (3) to distinguish vegetation from non-vegetation:

a = c3 / c2 = λ3 / λ2.  (3)

According to slope a, vegetation candidate voxels are classified into three groups (Figure 6):
G1 is a vegetation group, G2 is composed of ambiguous voxels (vegetation with trimmed surfaces
and façades tend to fall into this group), and G3 is the non-vegetation group. Slope aij shows a threshold
between two groups: a11 and a12 are used in the first loop, and a21 and a22 are used in the second loop.
Voxels classified to G2 are re-classified into G1 or G3 in the process discussed in Section 4.1.3.

After the three-group classification, we reduce the false-positive voxels that are misclassified into
G1. We examine an index that represents the homogeneity of vegetation voxels. The index is defined
as Equation (4):

homogeneity = NG1 / Nall,  (4)

where NG1 denotes the number of voxels classified into group G1, and Nall denotes the sum of G1, G2
and G3. When the index is below the designated threshold, the voxel is labeled as G2.
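The following sketch illustrates Equations (2)–(4) for a single voxel. The mapping of small and large
slopes to the groups follows the 3D-scatter reasoning above, and the threshold values and function
names are placeholders rather than the authors' implementation.

import numpy as np

def shape_features(points: np.ndarray):
    """Normalized eigenvalues c1 >= c2 >= c3 (Equation (2)) and the slope
    a = c3 / c2 = lambda3 / lambda2 (Equation (3)) for the points of one voxel."""
    centered = points - points.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    c = eigvals / eigvals.sum()
    return c, c[2] / max(c[1], 1e-12)

def classify_voxel(points: np.ndarray, t_planar: float = 0.02, t_scatter: float = 0.1) -> str:
    """Illustrative three-way split on the slope a: strongly planar or linear
    distributions go to the non-vegetation group (G3), strongly scattered 3D
    distributions to the vegetation candidate group (G1), and the remainder
    stays ambiguous (G2). The pairing of these placeholder thresholds with the
    paper's a_i1 and a_i2 follows Figure 6 only loosely."""
    _, a = shape_features(points)
    if a <= t_planar:
        return "G3"
    if a >= t_scatter:
        return "G1"
    return "G2"

def homogeneity(n_g1: int, n_all: int) -> float:
    """Equation (4): share of G1 voxels among all voxels in a neighbourhood
    (e.g., a 5 x 5 x 5 voxel window); low values demote a voxel to G2."""
    return n_g1 / n_all if n_all else 0.0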

Figure 6. Classification of voxels with the shape-based index defined by Equation (3). ai1 and ai2 are
thresholds in the i-th loop for discriminating G1 from G2, and G3 from G2, respectively.

4.1.3. Voxel Classification by Continuity

Vegetation extraction with slope a alone is not stable because local features are sensitive to noise.
Therefore, we improve classification accuracy using both local and contextual features. First, voxels
are classified into three groups with the local feature as described in Section 4.1.2, and then voxels in
G2 are re-classified according to their contextual features. Group G2 contains both vegetation voxels,
such as vegetation with trimmed surfaces, and non-vegetation voxels, such as window frames and
ridges of roofs. As shown in Figure 7, G2 voxels are gathered together by regarding neighboring voxels
as one cluster. Each cluster is classified into G1 or G3. This operation can be explained by

continuity = NG1 / (NG1 + NG3),  (5)

where NG1 corresponds to the number of G1 voxels and NG3 represents the number of G3 voxels
surrounding a target cluster. Continuity is defined as the proportion of NG1 to the sum of NG1 and
NG3. If continuity is greater than or equal to a designated threshold, the target voxel is classified as
G1; otherwise, it is classified as G3.

After vegetation extraction with a local feature and continuity, G1 still contains noisy voxels
located on windows and ridges of roofs. These voxels can be regarded as noise because they appear
sparsely and constitute small clusters. However, other vegetation voxels exist that are connected with
other vegetation voxels; for this reason, vegetation voxels tend to form larger clusters. We focused on
this contextual feature to eliminate noisy voxels using the number of voxels. In the process, G1 voxels
are divided into clusters by referring to the connectivity of voxels. If there are fewer voxels in a cluster
than the designated threshold, the cluster is regarded as noise.

Moreover, we consider the distribution characteristic of points in a whole cluster. In some cases,
noisy voxels form larger clusters that can be eliminated using c1 and c3 computed from points in a
whole cluster. This is because these clusters are mainly on façades with rough surfaces or on ridges
of roofs. If c1 is greater than a threshold or c3 is smaller than another threshold, the cluster is regarded
as a noise cluster. In this process, clusters formed with another threshold are classified as vegetation
without referring to c1 and c3 in order to reduce the computational cost. The thresholds given here
are set through experiments with sample data.

Figure 7. Classification of voxels based on continuity. Target voxels are changed into vegetation ones.
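The continuity test of Equation (5) and the cluster-level noise filtering can be sketched as follows;
the thresholds mirror the values quoted in Section 5, but the routines themselves are only illustrative
(for brevity, the exemption of very large clusters from the c1/c3 check is omitted).

import numpy as np

def continuity(n_g1_neighbors: int, n_g3_neighbors: int) -> float:
    """Equation (5): share of vegetation (G1) voxels among the classified
    voxels surrounding an ambiguous (G2) cluster."""
    total = n_g1_neighbors + n_g3_neighbors
    return n_g1_neighbors / total if total else 0.0

def relabel_ambiguous_cluster(n_g1_neighbors: int, n_g3_neighbors: int,
                              threshold: float = 0.55) -> str:
    """Re-classify a G2 cluster as G1 if its surroundings are mostly vegetation,
    otherwise as G3 (threshold value as reported in Section 5)."""
    return "G1" if continuity(n_g1_neighbors, n_g3_neighbors) >= threshold else "G3"

def is_noise_cluster(cluster_points: np.ndarray, n_voxels: int, min_voxels: int,
                     c1_max: float = 0.6, c3_min: float = 0.05) -> bool:
    """Cluster-level filtering sketch: small clusters, or clusters whose
    whole-cluster eigenvalue shares look linear (large c1) or planar (tiny c3),
    are treated as noise."""
    if n_voxels < min_voxels:       # e.g., 50 voxels in the first loop, 10 in the second
        return True
    centered = cluster_points - cluster_points.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    c = eigvals / eigvals.sum()
    return c[0] > c1_max or c[2] < c3_min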
4.2. Green Space Ratio Estimation

The methodology for calculating GSR using the classified point clouds is now explained. A point
cloud generated by resampling LiDAR data with voxels is classified into two classes using the
methodology explained in Section 4.1.3. A point cloud is divided into several parts on the x-y plane
because the whole dataset needed to estimate GSR is too large to process simultaneously. After
classifying every point cloud, the point cloud is labeled. If at least one point is labeled as vegetation
in an overlapped area, the target point is also labeled as vegetation. The labeled point cloud is then
stored in voxels, each of which is classified based on the points that it contains as expressed by

rv = Nv / Nall,  (6)

where Nv and Nall represent the number of vegetation points and the number of all points in a voxel,
respectively. If Nall is not larger than a threshold, the voxel is labeled as no object. In the case that rv is
less than another threshold, the voxel is classed as non-vegetation, or else as vegetation. In the voxel
space, a viewpoint of a person is given and the GSR from it is calculated as explained in Section 3.
a viewpoint of a personthe voxel and
is given is classed
the GSR as non-vegetation,
from it is calculated or else as vegetation.
as explained In the 3.
in Section voxel
space, a viewpoint of a person is given and the GSR from it is calculated as explained in Section 3.
5. Results

We conducted experiments using the data explained in Section 2. We set parameter values
required for the proposed method through experiments with sample data as follows. In vegetation
extraction, the normal for labeling vertical planar surfaces was defined to have a zenith angle from
85° to 95°. Two sizes of a voxel in the first loop, σ11 and σ12, were set to 0.5 m and 1.0 m, respectively.
In the second loop, σ21 and σ22 were set to 1.0 m and 2.0 m, respectively. The vegetation extraction
was repeated with different thresholds. In the first loop, a11 and a12 were used as thresholds related
to slope a. In the second loop, a21 and a22 were used. We set the thresholds through the experiment
with samples as follows: a11 = 0.02, a12 = 0.1, a21 = 0.06, and a22 = 0.2. The threshold for homogeneity
and continuity was set to 0.55. Neighboring was defined as a 5 × 5 × 5 voxel space. In noise removal,
the number of voxels for a cluster to be labeled as noise was 50 in the first loop and 10 in the second
loop. Moreover, if c1 was greater than 0.6 or c3 was smaller than 0.05, the cluster was regarded as a
noise cluster.

In GSR estimation, the size of a voxel was set to 0.5 m. As for dividing a point cloud into
sub-regions, the grid size was 20 m and the overlap length between two adjoining grids was 3 m.
The voxel size for storing labeled point clouds was set to 0.5 m. The threshold of Nall for labeling a
voxel as no object was set to 2, and the threshold of rv for labeling as non-vegetation, or else as
vegetation, was set to 0.5. In estimating the GSR, the height h of a person was set to 1.5 m.

To show the performance of vegetation extraction, Figures 8 and 9 demonstrate the improvement
of extracting vegetation based on a multi-spatial-scale approach and the effect of the voxel sizes set for
the experiments. Figures 8b and 9b are the results obtained by applying the optimal parameter values.

Figure 8. Improvement of extracting vegetation based on a multi-spatial-scale approach: (a) colored
point cloud; (b) vegetation extracted with σ11 = 0.5 m, σ12 = 1.0 m, σ21 = 1.0 m and σ22 = 2.0 m;
and (c) vegetation extracted with σ11 = 1.0 m, σ12 = 2.0 m, σ21 = 2.0 m and σ22 = 4.0 m. (b,c) Red and
green denote vegetation extracted in the first and second loop, respectively, and blue denotes
non-vegetation.

Figure 9. (a–c) Improvement of extracting vegetation based on a multi-spatial-scale approach.
See Figure 8 for a description of each panel.
We conducted accuracy assessments for vegetation extraction and GSR estimation. The former
We conducted accuracy assessments for vegetation extraction and GSR estimation. The former
was assessed by the F-measure, as shown in Equation (7):

F-measure = 2 · precision · recall / (precision + recall),
precision = TP / (TP + FP),  (7)
recall = TP / (TP + FN),

where TP, TN, FP and FN denote true positive, true negative, false positive and false negative,
respectively. The results are given in Table 1. Figure 10 shows points of the four labels in two cases:
using original point clouds and using voxels. Assessment using the original points may be biased
because far fewer points were observed around vegetation. Therefore, we assessed the results by
aggregating the labels of points into those of voxels.
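Equation (7) can be evaluated with a few lines; the check below reproduces the 0.2 m voxel column
of Table 1 from its TP, FP and FN counts.

def precision_recall_f(tp: int, fp: int, fn: int):
    """Equation (7): precision, recall and F-measure from the counts of true
    positives, false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# 0.2 m voxel column of Table 1: TP = 9154, FP = 1132, FN = 2721.
p, r, f = precision_recall_f(9154, 1132, 2721)
print(f"precision={p:.2f} recall={r:.2f} F-measure={f:.2f}")  # 0.89 0.77 0.83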

(a)

(b)

(c)
Figure 10. Accuracy assessment of vegetation extraction: (a) colored point cloud; (b) verified result for
the original point cloud; and (c) verified result for the 0.2 m voxel. (b,c) Green, blue, pink and yellow
denote true positive (TP), true negative (TN), false positive (FP) and false negative (FN), respectively.

Table 1. Accuracy assessment of vegetation extraction.

                         Original Point Cloud    0.2 m Voxel
Number of samples        1,993,253               29,043
True Positive (TP)       371,629                 9154
True Negative (TN)       1,392,444               16,036
False Positive (FP)      94,644                  1132
False Negative (FN)      134,536                 2721
Precision                0.80                    0.89
Recall                   0.73                    0.77
F-measure                0.76                    0.83
Figures 11 and 12 show comparisons of the ground truth and the occlusion map obtained by
applying the proposed method. Figure 13 illustrates the comparison of the actual GSR and the
estimated GSR. Finally, the RMSE of GSR estimation was found to be 4.1% for the 18 points.

(a)

(b)

(c)
Figure 11. Comparison of ground truth and occlusion map at P8 (shown in Figure 1): (a) ground truth
obtained from the images taken by a camera with a fisheye lens; (b) vegetation manually extracted
from (a); and (c) occlusion map generated using the proposed method. (b,c) Green denotes vegetation.
(c) Blue and white denote non-vegetation and others, respectively. (b) Actual GSR = 17.0%.
(c) Estimated GSR = 20.2%.

Figure 12. (a–c) Comparison of ground truth and occlusion map at P11 (shown in Figure 1).
See Figure 11 for a description of each panel. (b) Actual GSR = 7.1%. (c) Estimated GSR = 7.6%.

Figure 13. Comparison of actual GSR and estimated GSR.

6. Discussion
First, we discuss the accuracy of the extracted vegetation and estimated GSR. In vegetation
extraction, Figure 10b shows that the leaves of trees were well extracted, whereas the low box-shaped
hedge was not extracted, shown as false negative (FN) in yellow. Figures 11 and 12 show the actual
view and the occlusion map at P8 and P11, respectively. In the occlusion map, green, blue and white
areas represent vegetation, non-vegetation and no-object areas, respectively. The vegetation area
drawn in the occlusion map of Figure 12c corresponds approximately to the area colored in green in
the actual view of Figure 12b, whereas that of Figure 11c overestimated the vegetation compared to
Figure 11b. As a result, the accuracy of the GSR estimated by the proposed method—an RMSE of
4.1%—was found to be acceptable (Figure 13) considering that the proposed method is designed to
rapidly estimate GSR in wide areas.
The multi-spatial-scale extraction of vegetation implemented in the proposed method functions
properly, as shown in Figures 8 and 9. Vegetation has various types of 3D shape and surface roughness,
and thus the optimal spatial scale for extracting vegetation depends on such geometrical features.
We take the approach of extracting vegetation using only geometrical information derived from point
clouds, not using color information. This approach focuses on extracting geometrical information that
reflects vegetation properties that differ from those of non-vegetation. Multi-spatial-scale processing
is revealed to be effective for extracting vegetation. However, it falsely extracted as vegetation walls
whose surfaces were not flat (Figure 8b) and branches without leaves. The latter are difficult to extract
because they have similar geometrical features to vegetation, that is, they can be regarded as randomly
sampled objects.
Next, we focus on the advantage of the proposed method for extracting vegetation and estimating
GSR. As explained in Section 1, some existing approaches use point clouds and images to extract
vegetation [15–17]. Such images require a light source, and the brightness of vegetation in the images
may depend on the species and the measurement season. As a result, the performance of extracting
vegetation is not always stable. The proposed approach needs only point clouds and therefore it can
avoid such unstable extraction of vegetation. In addition, mobile LiDAR data are found to be effective
in estimating small vegetation that airborne LiDAR may have difficulty in extracting. In terms of
estimating GSR, the proposed method can estimate it at any point of the study area. Mobile LiDAR
can cover much larger areas than stationary LiDAR. The GSR estimated from mobile LiDAR data can
represent smaller vegetation than that from airborne LiDAR data.
Then, we address the selection of parameter values. The optimal parameter values for vegetation
extraction may be difficult to determine automatically. Therefore, in this research, we repeated manual
tuning by applying them to the training data sets. For example, in a previous study we found that the
optimal voxel sizes for extracting vegetation from terrestrial LiDAR data were 10 cm and 20 cm [16].
However, for mobile LiDAR data, we set them to 0.5 m and 1.0 m. We examined several different
values, but finally these larger values were found to be optimal. Figures 8 and 9 show that different
parameter values for voxel size failed to extract vegetation. The optimal parameter values for mobile
LiDAR data were found to be different from those for terrestrially fixed LiDAR data. Mobile LiDAR
data covers much larger areas than does terrestrial LiDAR data, and accordingly, the range of point
density per area of mobile LiDAR data changes much more than that of terrestrial LiDAR data.
Finally, we discuss the factors that contribute to the error in the GSR estimated by the proposed
method. First, the estimated GSR tends to be an overestimate. Reconstructed objects become bigger
than the actual objects because the voxel-based approach converts point clouds into 0.5 m or 1.0 m
voxels. Second, the vegetation around θ = 90◦ may cause large errors, especially when the point
of interest has tall trees and the occlusion map has some vegetation areas around θ = 90◦ . Voxels
around 90◦ and −90◦ contribute more to the GSR estimation than does the actual contribution when
the projection shown in Figures 11c and 12c is used for the occlusion map. This contribution can
be reduced using an equisolid angle projection [21]. Finally, the error in extracting vegetation using
point-cloud classification should be resolved. For example, when there is vegetation on a wall or
fence, such non-vegetation objects may be falsely extracted as a part of the vegetation. Separating such
non-vegetation objects is one of the most difficult challenges in LiDAR data processing. Improving
vegetation extraction is a key issue for improving GSR estimation.

7. Conclusions
In this paper, we presented a method for estimating GSR in urban areas using only mobile LiDAR
data. The method is composed of vegetation extraction and GSR estimation. Vegetation is extracted
by considering the shape of objects on multiple spatial scales. We applied the method to a residential
area in Kyoto, Japan. The obtained RMSE of approximately 4.1% was found to be acceptable for
rapidly assessing local landscape in a wide area. It was confirmed that mobile LiDAR could extract
vegetation along streets and roads, whereas the estimated GSR tends to be an overestimate because
of the voxel-based approach. The selection of optimal parameter values depends on the study area,
and thus requires manual tuning. However, the proposed method overcomes the existing challenge of
automatic vegetation extraction and contributes to the assessment of green space even in complex
urban areas, which will enrich green landscapes. Future tasks are the automatic selection of optimal parameter
values, and improvement of the accuracy of extracting vegetation points even when there is vegetation
on non-vegetation objects.

Acknowledgments: We express our thanks to Amane Kuriki of Kyoto University and Takuhiro Wakita for
supporting the in situ measurements using LiDAR and the camera, to The Obayashi Foundation for funding this
research, and to PASCO Co., Ltd. for providing the mobile LiDAR data.
Author Contributions: J.S. conceived and designed the experiments; J.S. and S.K. analyzed the data;
and J.S. wrote the paper.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Panduro, T.E.; Veie, K.L. Classification and valuation of urban green spaces—A hedonic house price
evaluation. Landsc. Urban Plan. 2013, 120, 119–128. [CrossRef]
2. Landscape Plan of Nishinomiya-City, 2011. Available online: http://www.nishi.or.jp/media/2016/
keikankeikaku_nishinomiya_201609.pdf (accessed on 27 February 2017).
3. Parent, J.R.; Volin, J.C.; Civco, D.L. A fully-automated approach to land cover mapping with airborne
LiDAR and high resolution multispectral imagery in a forested suburban landscape. ISPRS J. Photogramm.
Remote Sens. 2015, 104, 18–29. [CrossRef]
4. Tian, Y.; Jim, C.Y.; Wang, H. Assessing the landscape and ecological quality of urban green spaces in a
compact city. Landsc. Urban Plan. 2014, 121, 97–108. [CrossRef]
5. Gupta, K.; Kumar, P.; Pathan, S.K.; Sharma, K.P. Urban neighborhood green index—A measure of green
spaces in urban areas. Landsc. Urban Plan. 2012, 105, 325–335. [CrossRef]
6. Rutzinger, M.; Höfle, B.; Hollaus, M.; Pfeifer, N. Object-based point cloud analysis of full-waveform airborne
laser scanning data for urban vegetation classification. Sensors 2008, 8, 4505–4528. [CrossRef] [PubMed]
7. Elseberg, J.; Borrmann, D.; Nuchter, A. Full wave analysis in 3D laser scans for vegetation detection in urban
environments. In Proceedings of the 2011 XXIII International Symposium on Information, Communication
and Automation Technologies, Sarajevo, Bosnia and Herzegovina, 27–29 October 2011; pp. 1–7.
8. Unnikrishnan, R.; Hebert, M. Multi-scale interest regions from unorganized point clouds. In Proceedings
of the Computer Vision and Pattern Recognition Workshops 2008, Anchorage, AK, USA, 23–28 June 2008;
pp. 1–8.
9. Lim, E.H.; Suter, D. 3D terrestrial LIDAR classifications with super-voxels and multi-scale Conditional
Random Fields. Comput. Aided Des. 2009, 41, 701–710. [CrossRef]
10. Xu, S.; Vosselman, G.; Oude Elberink, S. Multiple-entity based classification of airborne laser scanning data
in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 88, 1–15. [CrossRef]
11. Brodu, N.; Lague, D. 3D terrestrial lidar data classification of complex natural scenes using a multi-scale
dimensionality criterion: Applications in geomorphology. ISPRS J. Photogramm. Remote Sens. 2012, 68,
121–134. [CrossRef]
12. Wakita, T.; Susaki, J. Multi-scale based extraction of vegetation from terrestrial LiDAR data for assessing
local landscape. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-3/W4, 263–270. [CrossRef]
13. Susaki, J.; Komiya, Y. Estimation of green ratio index using airborne LiDAR and aerial images. In
Proceedings of the 2014 8th IAPR Workshop on Pattern Recognition in Remote Sensing, Stockholm, Sweden,
24 August 2014; pp. 1–4.
14. Wakita, T.; Susaki, J.; Kuriki, A. Assessment of vegetation landscape index in urban areas from terrestrial
LiDAR Data. In Proceedings of the 36th Asian Conference on Remote Sensing (ACRS), Quezon City,
Philippines, 24–28 October 2015.
15. Huang, Y.; Yu, B.; Zhou, J.; Hu, C.; Tan, W.; Hu, Z.; Wu, J. Toward automatic estimation of urban green
volume using airborne LiDAR data and high resolution Remote Sensing images. Front. Earth Sci. 2013, 7,
43–54. [CrossRef]
16. Yang, J.; Zhao, L.S.; McBride, J.; Gong, P. Can you see green? Assessing the visibility of urban forests in cities.
Landsc. Urban Plan. 2009, 91, 97–104. [CrossRef]
17. Yu, S.; Yu, B.; Song, W.; Wu, B.; Zhou, J.; Huang, Y.; Wu, J.; Zhao, F.; Mao, W. View-based greenery:
A three-dimensional assessment of city buildings’ green visibility using Floor Green View Index.
Landsc. Urban Plan. 2016, 152, 13–26. [CrossRef]
18. Lin, Y.; Holopainen, M.; Kankare, V.; Hyyppa, J. Validation of mobile laser scanning for understory tree
characterization in urban forest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 3167–3173. [CrossRef]
19. Yang, B.; Dong, Z. A shape-based segmentation method for mobile laser scanning point clouds. ISPRS J.
Photogramm. Remote Sens. 2013, 81, 19–30. [CrossRef]
20. Wu, B.; Yu, B.; Yue, W.; Shu, S.; Tan, W.; Hu, C.; Huang, Y.; Wu, J.; Liu, H. A voxel-based method for
automated identification and morphological parameters estimation of individual street trees from mobile
laser scanning data. Remote Sens. 2013, 5, 584–611. [CrossRef]
21. Susaki, J.; Komiya, Y.; Takahashi, K. Calculation of enclosure index for assessing urban landscapes using
digital surface models. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4038–4045. [CrossRef]

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
