
Proc. of the 2017 IEEE International Conference on Signal and Image Processing Applications (IEEE ICSIPA 2017), Malaysia, September 12-14, 2017

A Vision-Based Method for the Detection of Missing Rail Fasteners
Thanawit Prasongpongchai∗, Thanarat H. Chalidabhongse†, Sangsan Leelhapantu‡
Department of Computer Engineering
Faculty of Engineering, Chulalongkorn University
Bangkok, Thailand
Email: thanawit.p@student.chula.ac.th∗, taepras@gmail.com∗, thanarat.c@chula.ac.th†, sangsan momo@hotmail.com‡

Abstract—Visual inspection of rail fasteners is crucial to rail safety. However, the traditional method, in which railway staff manually inspect the condition of fasteners, is time-consuming and prone to human error. In this paper, we present a method to automatically detect missing rail fasteners from top-view images. Using a top-down approach, coarse bounding boxes of potential fastener areas are first located from the track and tie regions with an edge density map and the RANSAC algorithm. Preprocessed with the guided filter, the regions within the bounding boxes are then scanned to detect rail fasteners using PHOG features and ε-SVR with an RBF kernel. Boxes in which no fastener is found are reported as missing fasteners. The proposed method was tested on scenes from complex real-world environments and showed a degree of robustness, achieving a 100% probability of detection and a 3.47% probability of false alarm for missing fastener detection. The results also indicate that the use of the guided filter, the RBF kernel, and the image pyramid technique for feature extraction significantly improves the performance of the classifier.

I. INTRODUCTION

A rail fastener is a crucial component of rail systems that fixes rail tracks and ties together. For concrete ties, two types of rail fasteners are commonly used: hexagonal-headed bolts and hook-shaped fasteners, or rail clips [1]. Defects in or absence of this component, as shown in Fig. 1b, can cause serious accidents such as train derailment. Thus, constant monitoring of rail fastener conditions is essential to ensure safety. However, rail fastener inspection is traditionally carried out as visual inspection by humans, i.e., railway staff walk along the railway to check whether there are any defects in fasteners and other rail components. Such a method of inspection consumes a huge amount of time and is prone to human error. As a result, automatic rail inspection vehicles are becoming available.

Fig. 1: Examples of intact and missing rail fasteners: (a) intact fasteners; (b) missing (left) and intact (right) fasteners.

In this paper, we aim to develop an algorithm to detect missing rail fasteners from top-view images of rail ties. Such systems can be used with cameras installed under automatic rail inspection vehicles.

A top-down approach is used in our method to classify the status of the fasteners in each image. The method can be roughly divided into two steps: fastener region estimation and missing fastener detection. In the first step, coarse regions of possible fastener locations are estimated from the track and the tie using an edge density map. The regions are located at each side of the intersection between the lines that represent the track and the tie. The located regions are then scanned in the next step to determine whether each region contains a fastener, using PHOG features and ε-SVR. Finally, if no fasteners are detected, a missing fastener report is filed.

The remaining part of this paper is organized into six sections. In section II, a survey of current methods in missing rail fastener detection is presented. Next, the image acquisition process and the dataset used in this study are described in section III. Details of fastener region estimation and missing fastener detection are presented in sections IV and V respectively. In section VI, we show the experimental results from our proposed method as it was tested in real-world environments. We conclude our findings and discuss future work in section VII.

II. RELATED WORK

Since the early 2000s, a number of vision-based methods for rail fastener inspection have been proposed. Stella et al. [2] presented a wavelet transform-based method to recognize hexagonal-headed rail-fastening bolts and rail clips using a 3-layer neural classifier. Singh et al. [3] used edge density for detecting missing fasteners from videos. Marino et al. [4] proposed the VISyR system for detecting hexagonal bolts in real time using multilayer perceptron neural classifiers (MLPNCs). In [5], Xia et al. presented an approach to detect broken rail fasteners with a method based on Haar-like features and the Adaboost algorithm by dividing the fastener region into several parts. Yang et al. [6] employed direction fields as the feature to detect missing rail clips from images. In [1], Feng et al. utilized a structural topic model to model fastener appearance in a method for the inspection of rail fasteners. Harris-Stephens and Shi-Tomasi feature detectors were also used by Khan et al. [7] to detect missing rail clips in images. In [8], Gibert et al. proposed a method to assess and classify rail fasteners using Histograms of Oriented Gradients (HOG) and a group of Support Vector Machines (SVMs) with a linear kernel.




Later, they introduced a multitask learning approach for the problem in [9].

From the literature, it can be seen that the desired characteristic of a missing fastener detection system is that it should be robust to a wide intra-class variance in the appearance of fasteners, including illumination changes, occlusions, and differences in texture and color.

A shape-based feature extractor could be used to reduce the mentioned intra-class variance. The Histograms of Oriented Gradients (HOG) feature extractor [10] proposed by Dalal and Triggs is well known for its robustness in the domain of person detection. The descriptor was later used in many other shape-based object recognition tasks. In [11], Bosch et al. proposed the use of the image pyramid technique with HOG and introduced the Pyramid Histogram of Oriented Gradients (PHOG) feature extractor. This new PHOG descriptor improved on the object recognition performance of the traditional HOG descriptor.

III. IMAGE ACQUISITION

The images used in this study were taken from several locations along the railway of the State Railway of Thailand (SRT) in Bangkok, Thailand, with an 8-megapixel smartphone camera. For each picture, the camera was held approximately 60 to 90 centimeters above the rail facing towards the ground, as shown in Fig. 2. Slight rotations in the images are observed since the camera was held by hand. The resulting images are top-view images of the crossing between a track and a tie, similar to that shown in Fig. 1. The dataset contains only fasteners of the e-clip type since it is the most common type of fastener in SRT railways.

Fig. 2: Camera configuration for image acquisition.

The images were then annotated using a custom bounding box marking tool. This tool was developed for this study in order to acquire the ground truth of the location and size of fasteners appearing in the images and their conditions: good, missing, partially occluded, or uninspectable. The bounding boxes were drawn as tightly as possible to the edges of the fasteners in order to ensure consistency in the dataset. Fastener locations in the images and their conditions were collected to be used as the training data for our classification model and also for the evaluation of our method.

As shown in Fig. 3, a number of complex scenes are present in the dataset and a large variety of visual conditions are observed. From our visual inspection, variations in the appearance of fasteners in the dataset include illumination changes, textures, and colors. Partial and full occlusions of fasteners are also present, as some fasteners are obscured by plants growing on the railway, litter, or rail ballast.

Fig. 3: Variation of visual conditions observed in the dataset. A number of fasteners in the dataset are obscured with dirt, rust, grease, litter, plants, stones, etc.

IV. FASTENER REGION ESTIMATION

As fasteners are installed on ties on each side of a track, the coarse location of a fastener can be determined from the regions of the track and the tie. This step is performed in order to 1) minimize the search space for the actual fastener recognition and classification step and 2) reduce the chance of false detection of fasteners in other areas of the image.

The proposed method of fastener region estimation is robust to slight rotations and scaling variations, which can be expected to occur since different automatic rail inspection systems may have different camera configurations. These are the steps to acquire the coarse position of the fasteners:

A. Track and Tie Segmentation

Since the acquired images were collected from a railway containing ballast, the crushed stones that form the trackbed, the areas in the image containing ballast are generally more cluttered than the areas with the track and the tie. With this assumption, the image can be segmented to determine the track and tie regions using edge density.

After converting the input image to grayscale, local normalization and the guided filter [12] are applied to strengthen the edges and reduce the noise, as shown in Fig. 4b. Requiring two inputs, the image to be filtered and the guide, the guided filter computes the output based on both inputs. Using the input image itself as the guide, the filter can act as an edge-preserving blur filter. After that, an edge magnitude map, as illustrated in Fig. 4c, is obtained by means of the Sobel filter. Then, a box blur is applied to the edge map and the result is inverted to create an edge density map. Next, histogram equalization and gamma correction are performed in order to increase the discrimination between low and high edge density areas. Fig. 4d shows the processed edge density map. Finally, the processed density map is thresholded with Otsu's method [13]. From these steps, the track and tie region in the image is acquired as a binary mask, as shown in Fig. 4e.

Fig. 4: The result from each step of track and tie segmentation: (a) input image, (b) preprocessed image, (c) edge magnitude map, (d) edge density, and (e) the resulting mask of track and tie area.
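To make the segmentation pipeline concrete, the sketch below walks through the steps described above with OpenCV. It is a minimal illustration under stated assumptions, not the authors' implementation: the guided-filter radius, blur window, gamma value, and local-normalization scale are illustrative, and cv2.ximgproc.guidedFilter requires the opencv-contrib package.

```python
import cv2
import numpy as np

def track_tie_mask(bgr, gf_radius=8, gf_eps=0.01, density_win=31, gamma=2.0):
    """Segment the low-edge-density track/tie area, following the steps of Sec. IV-A.
    All parameter values here are illustrative assumptions."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0

    # Local normalization: subtract a local mean and divide by a local std estimate.
    mean = cv2.GaussianBlur(gray, (0, 0), 10)
    std = np.sqrt(cv2.GaussianBlur((gray - mean) ** 2, (0, 0), 10)) + 1e-6
    norm = (gray - mean) / std

    # Guided filter with the image as its own guide acts as an edge-preserving blur.
    pre = cv2.ximgproc.guidedFilter(norm, norm, gf_radius, gf_eps)

    # Edge magnitude via the Sobel operator.
    gx = cv2.Sobel(pre, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(pre, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)

    # Box blur -> edge density; invert so low-density (track/tie) areas become bright.
    density = cv2.blur(mag, (density_win, density_win))
    density = cv2.normalize(density, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    inverted = 255 - density

    # Histogram equalization and gamma correction sharpen the low/high-density contrast.
    eq = cv2.equalizeHist(inverted)
    gamma_corrected = np.uint8(255 * (eq / 255.0) ** gamma)

    # Otsu's threshold yields the binary track-and-tie mask.
    _, mask = cv2.threshold(gamma_corrected, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```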


B. Fastener Position Estimation

A slightly modified version of the Random Sample Consensus algorithm (RANSAC) [14] is performed on the binary mask from the last step to obtain the lines that run along the track and the tie. RANSAC is an iterative process that aims to fit a straight line to a given set of points or pixels. In each iteration, a pair of points is sampled to form a candidate line. Every point whose distance to the line does not exceed a pre-specified margin is then counted towards the score of the line. In traditional RANSAC, the line with the highest score is considered the fitting line.

In our method, the margin of RANSAC is set to the maximum value of the distance transform map computed from the binary mask, which is approximately half the width of the tie. This helps deal with the variation in scale observed in the dataset. In our implementation of RANSAC, since we need two lines to represent the track and the tie, we choose the highest-ranked sample as the first line and then the next highest-ranked sample whose angle to the first line is greater than a threshold value θ. In this study, we use θ = 80°. The line with the higher slope is then regarded as the track line and the other as the tie line. With no angle constraints applied to the first line, it can be expected that the algorithm will be robust to slight rotations.
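The following sketch shows one way the two-line RANSAC variant described above could be implemented. It is a simplified illustration under stated assumptions (a fixed iteration count, lines represented as a point plus a unit direction), not the authors' code.

```python
import cv2
import numpy as np

def fit_track_and_tie(mask, iters=500, theta_deg=80.0, rng=np.random.default_rng(0)):
    """Fit two lines (track and tie) to a binary mask with a RANSAC variant.
    The inlier margin is the distance-transform maximum, roughly half the tie width."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    margin = cv2.distanceTransform((mask > 0).astype(np.uint8), cv2.DIST_L2, 3).max()

    candidates = []  # (score, point_on_line, unit_direction)
    for _ in range(iters):
        p1, p2 = pts[rng.choice(len(pts), 2, replace=False)]
        d = p2 - p1
        if np.linalg.norm(d) < 1e-6:
            continue
        d = d / np.linalg.norm(d)
        # Perpendicular distance of every mask pixel to the candidate line.
        dist = np.abs((pts[:, 0] - p1[0]) * d[1] - (pts[:, 1] - p1[1]) * d[0])
        candidates.append((int((dist <= margin).sum()), p1, d))

    candidates.sort(key=lambda c: c[0], reverse=True)
    first = candidates[0]
    # Second line: best-scoring candidate whose angle to the first exceeds theta.
    cos_limit = np.cos(np.deg2rad(theta_deg))
    second = next((c for c in candidates[1:]
                   if abs(np.dot(c[2], first[2])) < cos_limit), candidates[1])

    # The steeper line is taken as the track, the other as the tie.
    def slope(c):
        return abs(c[2][1] / (c[2][0] + 1e-9))
    track, tie = (first, second) if slope(first) > slope(second) else (second, first)
    return (track[1], track[2]), (tie[1], tie[2])
```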
The coarse bounding boxes containing the areas where a fastener should be installed are determined as the regions next to the intersection between the track line and the tie line. The bounding boxes are shifted from the intersection point to each side of the track along the tie line to cover the areas where fasteners are expected to be present. The distance by which the bounding boxes are shifted depends on the estimated width of the tie. An example of the results from our fastener region estimation method, with the track and tie lines computed from the last step, is presented in Fig. 5.

Fig. 5: Example result from our fastener region estimation method. The estimated bounding boxes are shown as green rectangles. The blue (dashed) and red (dotted) lines are the computed track and tie lines respectively.
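A minimal sketch of this box placement is given below. The paper states only that the shift depends on the estimated tie width; the specific shift and box-size factors used here, and the reuse of the distance-transform maximum as a tie-width estimate, are illustrative assumptions.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection of two lines, each given as a point and a unit direction."""
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    # Solve p1 + t*d1 = p2 + s*d2 for (t, s).
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1

def coarse_fastener_boxes(track, tie, tie_half_width, shift_factor=2.0, box_factor=3.0):
    """Place one coarse box on each side of the track, shifted along the tie line.
    `tie_half_width` can be the distance-transform maximum used as the RANSAC margin.
    The shift/box factors are assumptions; boxes are returned as (x, y, w, h)."""
    (tp, td), (sp, sd) = track, tie
    center = line_intersection(tp, td, sp, sd)
    sd = np.asarray(sd)
    shift = shift_factor * 2 * tie_half_width   # distance from the intersection along the tie
    size = box_factor * 2 * tie_half_width      # assumed square box side length
    boxes = []
    for sign in (+1, -1):                       # one candidate region on each side of the track
        cx, cy = center + sign * shift * sd
        boxes.append((int(cx - size / 2), int(cy - size / 2), int(size), int(size)))
    return boxes
```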
V. MISSING FASTENER DETECTION

After obtaining the coarse bounding boxes of the areas that are expected to contain fasteners, the image within each box is classified as to whether a fastener is present. In this step, a refined region of interest for the fastener is also estimated, and a missing fastener is eventually reported if no fastener is detected in a bounding box.

A. Image Preprocessing

While rail fasteners of the same type have similar shapes, their appearances vary a lot in terms of color and texture, as new fasteners are coated with paint whereas some old ones are covered in dirt, rust, or grease. In order to cope with these variations, we first preprocess each input image by converting it to grayscale to eliminate the variation in color and applying the guided filter [12] to reduce the effect of the variation in texture before proceeding to the next step.

B. Feature Extraction

The Pyramid Histogram of Oriented Gradients (PHOG) [11] is chosen as the feature vector to describe the image patches. Developed from the Histogram of Oriented Gradients (HOG) [10] feature extractor, which is well known for its use in person detection, PHOG is considered suitable for object detection where shape is the main characteristic under consideration.

To extract PHOG vectors, multiple HOG features with different cell sizes are computed from the window in consideration. The cell width and height are doubled in every iteration to form the image pyramid. The resulting vectors are normalized by their magnitudes and concatenated together to get the PHOG vector of the region. Fig. 6 visualizes the PHOG features of an image region containing a fastener by showing the HOG features at each pyramid layer.

Fig. 6: Visualization of PHOG features of an image region containing a fastener.

Using edge direction and its spatial information, we can see that PHOG emphasizes the shape of the object to be modeled, i.e., objects with the same shape have low intra-class variance when represented as PHOG vectors. Using the pyramid technique helps the algorithm cope better with variations in the texture of the object. Moreover, since local normalization is performed on the feature vector when the HOG descriptors are computed, PHOG is robust to illumination changes. These are the desired properties of feature vectors for this task, since there is considerable variation in the appearance of fasteners due to different lighting conditions and the surrounding environment.


In our implementation, the number of pyramid layers used is 3. The window size of the PHOG is 64 by 64 pixels, with a starting cell size of 8 by 8 pixels, a starting block size of 16 by 16 pixels, and 9 HOG bins. The size of each resulting PHOG feature vector is 2,124.
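As a rough illustration of the layer-wise HOG concatenation described above, the sketch below stacks OpenCV HOG descriptors whose cell and block sizes double at each layer. The block stride (assumed to be one cell) and the per-layer L2 normalization are our assumptions; under these assumptions the three layers give 1,764 + 324 + 36 = 2,124 dimensions, matching the vector size stated above.

```python
import cv2
import numpy as np

def phog(patch_64x64, layers=3, nbins=9):
    """Concatenate HOG descriptors over a pyramid of cell sizes (a PHOG-style sketch).
    `patch_64x64` is an 8-bit grayscale 64x64 image."""
    feats = []
    cell, block = 8, 16                      # starting cell and block sizes (paper values)
    for _ in range(layers):
        # Arguments: winSize, blockSize, blockStride (assumed one cell), cellSize, nbins.
        hog = cv2.HOGDescriptor((64, 64), (block, block), (cell, cell), (cell, cell), nbins)
        v = hog.compute(patch_64x64).flatten()
        v = v / (np.linalg.norm(v) + 1e-9)   # normalize each layer by its magnitude
        feats.append(v)
        cell, block = cell * 2, block * 2    # double cell/block size at each pyramid layer
    return np.concatenate(feats)             # 1764 + 324 + 36 = 2124 dims for 3 layers
```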
C. Training the Model

Epsilon Support Vector Regression (ε-SVR) [15] with a radial basis function (RBF) kernel is used to classify whether an input image region contains a fastener. ε-SVR is a type of Support Vector Machine (SVM) used for regression problems. A benefit of using a regression model for a binary classification problem is that the threshold value for classification can be adjusted afterwards. This is beneficial in practical use of the system, where users are able to adjust the threshold value during inspection sessions.

To train the model, each image in the dataset is preprocessed with the guided filter [12]. Then, a number of image patches are cropped from the dataset. For the positive samples, fastener regions padded with a small margin are cropped from the dataset. The size of the margin is set to 1/8 of the width of the area. All of the sample patches with a fastener are rotated to acquire a consistent orientation. The negative training set is sampled from 1) areas on the tie with missing fasteners and 2) background patches cropped from random areas in the parts of the images without fasteners, e.g., on the track, the tie, the ballast, etc. The negative samples are also flipped horizontally and vertically and rotated by 180 degrees to increase the sample size. All of the samples are finally resized to 64 by 64 pixels. Examples of the sample patches are presented in Fig. 7.

Fig. 7: Examples of the cropped images for training our classifier. Positive samples of fasteners are shown in the top row, negative samples containing missing fastener regions are shown in the middle row, and background patches are shown in the bottom row.
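The patch preparation above can be sketched as follows. Rotating positive patches to a consistent orientation depends on the annotated tie direction, which is not specified here, so this sketch only illustrates the flip/rotation augmentation of negative patches and the final resizing; it is an assumption-laden illustration rather than the authors' tooling.

```python
import cv2

PATCH_SIZE = (64, 64)

def prepare_negative_patches(patches):
    """Augment negative patches with horizontal/vertical flips and a 180-degree rotation,
    then resize everything to the 64x64 training size."""
    augmented = []
    for p in patches:
        variants = [
            p,
            cv2.flip(p, 1),                  # horizontal flip
            cv2.flip(p, 0),                  # vertical flip
            cv2.rotate(p, cv2.ROTATE_180),   # 180-degree rotation
        ]
        augmented.extend(cv2.resize(v, PATCH_SIZE) for v in variants)
    return augmented
```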
The parameters used for the ε-SVR in our implementation are ε = 0.001 and C = 1, and the parameter γ of the RBF kernel is set to 1.

PHOG feature vectors are then extracted from the image patches. Samples with a fastener and all other samples are labeled as 1 and 0 respectively. The SVR model is finally trained with the resulting feature vectors and their labels.
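A minimal training sketch with scikit-learn's SVR is shown below, using the parameter values stated above and the phog() helper sketched earlier (an assumption of this illustration); the preprocessing and patch cropping are assumed to have produced the 64x64 grayscale patches and their labels.

```python
import numpy as np
from sklearn.svm import SVR

def train_fastener_svr(patches, labels):
    """Train epsilon-SVR with an RBF kernel on PHOG features.
    `patches`: list of 64x64 grayscale patches; `labels`: 1 for fastener, 0 otherwise."""
    X = np.vstack([phog(p) for p in patches])      # phog() as sketched in Sec. V-B
    y = np.asarray(labels, dtype=np.float64)
    model = SVR(kernel="rbf", C=1.0, epsilon=0.001, gamma=1.0)
    model.fit(X, y)
    return model
```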

D. Classification

The image cropped with the bounding box from the region estimation step is the input of this step. The input image is first scaled to several sizes to enable multi-scale detection of fasteners. In our implementation, the input image is scaled to 6 different sizes, with the longer side ranging from 180 pixels down to 80 pixels. Each resized image is then scanned with a 64×64-pixel sliding window with a stride of 8 pixels. For each window, we extract a PHOG feature vector and compute a regression value from our trained SVR model as the score of that window. The score s of the entire input image is calculated as

s = max_{w ∈ W} SVR(PHOG(w))

where w is the image area cropped by the sliding window and W is the set of all possible sliding windows within the image. If the region has a score higher than the user-adjustable threshold τ, we conclude that the region contains a fastener; otherwise the fastener is missing. The window w_max that gives the maximum score is considered the area containing the fastener. In this way, a refined bounding box of the fastener is also acquired. The score maps at different scales are illustrated in Fig. 8, with the resulting detection windows shown on the right. We can see from the score map that the score rises significantly at the region containing the fastener.

Fig. 8: SVR score maps at different scales (left) with the resulting detection windows drawn on the preprocessed image (right); a bright area indicates a high score at the corresponding top-left position of the window.
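The multi-scale scanning and the score s = max_{w ∈ W} SVR(PHOG(w)) can be sketched as below, again reusing the hypothetical phog() helper and a trained SVR model; the handling of image borders and the exact resizing policy are assumptions of this sketch.

```python
import cv2
import numpy as np

def detect_fastener(region_gray, model, tau, win=64, stride=8,
                    long_sides=(180, 160, 140, 120, 100, 80)):
    """Score a candidate region with a multi-scale sliding window.
    Returns (has_fastener, best_score, best_box) with best_box in original coordinates."""
    best_score, best_box = -np.inf, None
    h, w = region_gray.shape
    for long_side in long_sides:
        scale = long_side / max(h, w)
        resized = cv2.resize(region_gray, (max(win, int(w * scale)),
                                           max(win, int(h * scale))))
        rh, rw = resized.shape
        for y in range(0, rh - win + 1, stride):
            for x in range(0, rw - win + 1, stride):
                window = resized[y:y + win, x:x + win]
                score = float(model.predict(phog(window).reshape(1, -1))[0])
                if score > best_score:
                    # Map the best window back to the original region's coordinates.
                    best_score = score
                    best_box = (int(x / scale), int(y / scale), int(win / scale))
    return best_score > tau, best_score, best_box
```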
VI. EXPERIMENTAL RESULTS

To evaluate the proposed method, it was tested with the dataset mentioned in section III. The dataset contains 758 images with 1,518 regions of interest expected to contain rail fasteners. Among these regions, there are 1,450 good fasteners in total, 170 of which are marked as partially occluded. There are 24 regions marked as uninspectable since the clip is fully obscured, and 44 missing fasteners (2.95% of inspectable samples) were found in the dataset. All of the fasteners in the dataset are of the e-clip type. The fastener region estimation method and the missing fastener detection method were evaluated separately.

A. Fastener Region Estimation

The purpose of this experiment is to evaluate our proposed fastener region estimation method. Precision, recall, and F1 score are used to evaluate the performance of the estimation.


TABLE I: Performance of the proposed fastener region estimation method.

Precision | Recall | F1 score | Search Area Reduced
91.53%    | 81.82% | 86.40%   | 71.37%

Another metric used in the evaluation is the search area reduced by the region estimation step.

The numbers of true positives, false negatives, and false positives are counted from 1) fastener areas detected by the method, 2) fasteners that are not detected, and 3) detected areas without a fastener, respectively. The F1 score is defined as the harmonic mean of recall and precision.

It should be noted that our region estimation method is meant to reduce the search space in each image for the subsequent missing fastener detection step; a region estimation result is therefore counted as a true positive when it covers more than 95% of the area of the fastener bounding box.
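For reference, the measures used in this and the following experiment can be computed from the confusion counts as in the short sketch below (these are the standard definitions; PD and PFA follow the usual detection-theory formulas).

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, F1 (harmonic mean), plus probability of detection (PD)
    and probability of false alarm (PFA) from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # identical to PD
    f1 = 2 * precision * recall / (precision + recall)
    pfa = fp / (fp + tn)
    return {"precision": precision, "recall": recall, "f1": f1, "pd": recall, "pfa": pfa}
```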
Table I shows the performance of our method. The proposed method achieves a precision, recall, and F1 score of 91.53%, 81.82%, and 86.40% respectively, while the area to be searched is reduced by 71.37% on average (fastener bounding boxes occupy on average 5.33% of the area of each image). The method took 269 milliseconds per image in our testing environment of an Intel Core i7-5500U processor with 8GB of DDR3 RAM. From our inspection, the proposed method works well on images without major occlusions in the tie area, since such occlusions violate the assumption behind our method that the tie areas contain lower edge density than the other areas.

B. Missing Fastener Detection

In this experiment, the dataset used to evaluate the classifier was cropped from each region of the original dataset where a fastener is expected. Examples of intact and missing fasteners in this new dataset are shown in Fig. 9. The regions are padded with a random margin on every side to imitate possible results from the fastener region estimation step, in which the located fastener might not be at the center of the bounding box.

Fig. 9: Example images cropped from the original dataset for the evaluation of missing fastener detectors. Negative samples (intact fasteners) are shown in (a) to (d) and positive samples (missing fasteners) are shown in (e) to (h).

We tested our missing fastener detection method using 3-fold cross-validation. The dataset is divided into three parts, and three iterations of testing are performed with one part of the dataset as the test set and the others combined as the training set. The final performance measures are calculated from the measures of each iteration.
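A compact sketch of this 3-fold protocol with scikit-learn is shown below; the train_fastener_svr() helper from the earlier sketch and the evaluate_fold callback are assumptions of this illustration, and the per-fold measures are simply averaged here.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(patches, labels, evaluate_fold, n_splits=3, seed=0):
    """3-fold cross-validation: train on two folds, test on the held-out fold,
    then average the per-fold metric dictionaries returned by `evaluate_fold`."""
    patches, labels = np.asarray(patches), np.asarray(labels)
    folds = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True,
                                     random_state=seed).split(patches):
        model = train_fastener_svr(patches[train_idx], labels[train_idx])
        folds.append(evaluate_fold(model, patches[test_idx], labels[test_idx]))
    return {k: float(np.mean([f[k] for f in folds])) for k in folds[0]}
```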
In our experiment, we calculated the performance measures for the missing fastener detection problem, i.e., images with a missing fastener were considered positive samples and images with intact fasteners negative samples.

The ROC curve of the proposed missing fastener detection method is shown in Fig. 10a, and the probability of detection (PD) and probability of false alarm (PFA) of the missing fastener detectors are shown in Table II.

Fig. 10: Performance metrics of the proposed missing fastener detector at different threshold values: (a) ROC curve; (b) recall, precision, and F1 score.

TABLE II: Probability of detection and probability of false alarm of the classifiers for missing fastener detection.

Method                            | PD      | PFA
GF-PHOG-SVR (The Proposed Method) | 100.00% | 3.47%
GF-HOG-SVR                        | 97.73%  | 2.43%
PHOG-SVR                          | 100.00% | 3.88%
GF-PHOG-linear SVR                | 97.73%  | 8.80%

Precision, recall, and F1 score are also used as performance metrics for missing fastener detection, as they are considered more suitable for this particular problem, in which the dataset is highly imbalanced. We can see from the definitions of precision and recall that, for this task, they better reflect the performance of the classifier in practical use, as they show how much human effort is required to correct the misclassified samples.

As indicated in Table III, the proposed method achieves an optimal F1 score of 75.47% in missing fastener detection. The recall, precision, and F1 scores of the proposed classifier at different thresholds are also shown in Fig. 10b. It can be observed from the table that the use of the guided filter (GF) and the image pyramid technique in PHOG improved the F1 score by 5.10% and 3.91% respectively. The use of the RBF kernel also significantly increased the F1 score, by 15.01%. In addition, if the images with partially occluded fasteners are not considered in the evaluation, our classifier achieves an F1 score of 96.70% in missing fastener detection.


TABLE III: Missing fastener detection performance of different classification methods.

Method                            | Precision | Recall | F1
GF-PHOG-SVR (The Proposed Method) | 64.52%    | 90.91% | 75.47%
GF-HOG-SVR                        | 60.00%    | 88.63% | 71.56%
PHOG-SVR                          | 59.38%    | 86.36% | 70.37%
GF-PHOG-linear SVR                | 61.90%    | 59.09% | 60.47%

However, one disadvantage of using the RBF kernel in the proposed method is its high computation time. The proposed method took an average of 2.7 seconds per image, whereas the SVR classifier with a linear kernel took only an average of 0.86 seconds per image. Fortunately, a reduction in computation time can be expected in practical settings of an inspection system, in which the position of the camera will be fixed, allowing the search space to be easily and significantly reduced.

Examples of detection results are shown in Fig. 11. It can be said that while the classifier is robust to a degree of occlusion and texture variation, it does not work well in images with major occlusions and texture irregularities.

Fig. 11: Examples of the missing fastener detection results from the proposed classifier. The colored rectangles indicate the region with the highest score in each image, with the score denoted on top. The red and green rectangles signify positive (missing fastener) and negative (intact fastener) classifications respectively. Orange rectangles denote the ground truth of the fastener bounding boxes. True negatives, true positives, false negatives, and false positives are shown in (a) to (e), (f) to (h), (i) to (k), and (l) respectively.

VII. CONCLUSION

In this paper, we have presented a method for automatically detecting missing rail fasteners from images. First, a coarse fastener region estimation method based on an edge density map and RANSAC is performed in order to lessen the search space for the fine detection of fasteners. Then, a missing fastener detection method based on PHOG features and ε-SVR is applied to the regions obtained from the region estimation step. The guided filter is also used to smooth the texture of rail fasteners before further processing. Our testing has shown that the proposed classifier is robust to a degree of occlusion and variance in fastener texture observed in complex real-world environments.

Future work includes improving the method to better handle scenes with major occlusions and texture anomalies. Performance optimization is also needed to make the algorithm able to work in real time.

REFERENCES

[1] H. Feng, Z. Jiang, F. Xie, P. Yang, J. Shi, and L. Chen, "Automatic Fastener Classification and Defect Detection in Vision-Based Railway Inspection Systems," IEEE Transactions on Instrumentation and Measurement, vol. 63, no. 4, pp. 877–888, Apr. 2014.
[2] E. Stella, P. Mazzeo, M. Nitti, G. Cicirelli, A. Distante, and T. D'Orazio, "Visual recognition of missing fastening elements for railroad maintenance," in Proceedings of the IEEE 5th International Conference on Intelligent Transportation Systems, 2002, pp. 94–99.
[3] M. Singh, S. Singh, J. Jaiswal, and J. Hempshall, "Autonomous Rail Track Inspection using Vision Based System," in 2006 IEEE International Conference on Computational Intelligence for Homeland Security and Personal Safety, 2006, pp. 56–59.
[4] F. Marino, A. Distante, P. L. Mazzeo, and E. Stella, "A real-time visual inspection system for railway maintenance: Automatic hexagonal-headed bolts detection," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 37, no. 3, pp. 418–428, May 2007.
[5] Y. Xia, F. Xie, and Z. Jiang, "Broken Railway Fastener Detection Based on Adaboost Algorithm," in 2010 International Conference on Optoelectronics and Image Processing, vol. 1, 2010, pp. 313–316.
[6] J. Yang, W. Tao, M. Liu, Y. Zhang, H. Zhang, and H. Zhao, "An Efficient Direction Field-Based Method for the Detection of Fasteners on High-Speed Railways," Sensors, vol. 11, no. 8, pp. 7364–7381, Jul. 2011.
[7] R. A. Khan, S. Islam, and R. Biswas, "Automatic detection of defective rail anchors," in 17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2014, pp. 1583–1588.
[8] X. Gibert, V. M. Patel, and R. Chellappa, "Robust Fastener Detection for Autonomous Visual Railway Track Inspection," in 2015 IEEE Winter Conference on Applications of Computer Vision, 2015, pp. 694–701.
[9] X. Gibert, V. M. Patel, and R. Chellappa, "Deep Multitask Learning for Railway Track Inspection," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 1, pp. 153–164, 2017.
[10] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, Jun. 2005, pp. 886–893.
[11] A. Bosch, A. Zisserman, and X. Munoz, "Representing Shape with a Spatial Pyramid Kernel," in Proceedings of the 6th ACM International Conference on Image and Video Retrieval, ser. CIVR '07. New York, NY, USA: ACM, 2007, pp. 401–408. [Online]. Available: http://doi.acm.org/10.1145/1282280.1282340
[12] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, Jun. 2013.
[13] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, Jan. 1979.
[14] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381–395, Jun. 1981.
[15] H. Drucker, C. J. C. Burges, L. Kaufman, A. Smola, and V. Vapnik, "Support vector regression machines," in Proceedings of the 9th International Conference on Neural Information Processing Systems, ser. NIPS'96. Cambridge, MA, USA: MIT Press, 1996, pp. 155–161.

