
LEAST SQUARES VERSUS MINIMUM ABSOLUTE
DEVIATIONS ESTIMATION IN LINEAR MODELS

Hoyt G. Wilson, University of British Columbia

ABSTRACT
Previous research has indicated that minimum absolute deviations (MAD)
estimators tend to be more efficient than ordinary least squares (OLS) estimators in the
presence of large disturbances. Via Monte Carlo sampling this study investigates cases
in which disturbances are normally distributed with constant variance except for one or
more outliers whose disturbances are taken from a normal distribution with a much
larger variance. It is found that MAD estimation retains its advantage over OLS
through a wide range of conditions, including variations in outlier variance, number of
regressors, number of observations, design matrix configuration, and number of
outliers. When no outliers are present, the efficiency of MAD estimators relative to
OLS exhibits remarkably slight variation.

INTRODUCTION

For a number of reasons, least-squares regression analysis is perhaps the


single most widely used statistical technique by practitioners in industry, govern-
ment, and academia. First, when the assumptions of the model are satisfied, the
parameter estimators have well-known and desirable statistical properties
(especially, they are BLUE). Second, a number of associated statistical tests are
available to aid the analyst in developing and interpreting an empirical model.
Third, as a result of its widespread use, much experience has been gained
regarding the use of the method. Fourth, efficient and easy-to-use computer
routines are widely available, rendering the application of least squares
effortless and inexpensive.
Despite the high regard in which least-squares estimation is held in the
canons of statistical theory, there are situations wherein other criteria are more
appropriate for parameter estimation in simple linear models. In the process of
data-suggested model development, the analyst is rummaging through the
data in an effort to discover relationships. This activity often involves determin-
ing functional forms in addition to screening regressors for inclusion. In many
applied settings one has little basis for judging a priori whether the usual least-
squares assumptions on the disturbances are satisfied. An important problem
involves the identification and handling of observations that are outliers.
Whatever the sources of the anomalies, it is usually desirable that outliers be
identified and that they not have an unduly large influence on model parameter
estimates. Least-squares estimation falls short on both counts. The nature of the
procedure (minimizing the sum of squared deviations) dictates that very large
deviations from the regression hyperplane will be avoided. This means that the fit
of the model to other points will be sacrificed in order to accommodate outliers.
Since the estimation procedure goes to such lengths to avoid large deviations, it is
often difficult to spot true outliers. Further, the damage is compounded because
the inclusion of the outliers can lead to serious errors in the parameter estimates
and real difficulty in recognizing the correct model.
A number of estimation procedures which are more robust to departures
from the usual least squares assumptions have been discussed in the recent
statistical literature [1] [2] [6] [10] [11] [15]. Hogg [9] gives an excellent review of
much of the work that has been done in the area. One method, minimum absolute
deviations (MAD) estimation, stands out as perhaps the most promising for ap-
plied work due to a combination of robustness properties and computational
ease. This article will study the relative performance of MAD vs. ordinary least-
squares (OLS) estimators in the presence of outliers. The effects of variations in
error distributions, design matrices, sample sizes, and model form will be in-
vestigated.

MAD ESTIMATION

Throughout the discussion, the model under consideration will be of the
form

$$ Y = X\beta + \epsilon, $$

where Y is an n × 1 vector of observations on the regressand, X is an n × p matrix
of values of the p regressors, β is a p × 1 vector of parameters, and ε is an n × 1
vector of random disturbances. Residuals (deviations from the regression
hyperplane) are defined as

$$ e = (e_1, e_2, \ldots, e_n)' = Y - X\hat{\beta}, $$

where β̂ is the estimator of β.

The L_h estimator of β is the β̂ that minimizes

$$ \sum_{i=1}^{n} |e_i|^h. $$

It is to be noted that MAD (h = 1) and OLS (h = 2) are special cases of L_h estimation.
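As a concrete illustration (mine, not part of the original study), the two criteria can be compared directly on simulated data. The sketch below assumes Python with numpy and statsmodels, and uses the fact that MAD (L1) regression coincides with quantile regression at the median:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 25
    X = sm.add_constant(rng.uniform(-2, 2, size=(n, 2)))  # intercept plus two regressors
    beta = np.array([4.0, 2.0, -1.0])
    e = rng.normal(0.0, 1.0, n)
    e[0] = rng.normal(0.0, 10.0)          # one outlier drawn from N(0, 10^2)
    y = X @ beta + e

    ols = sm.OLS(y, X).fit()              # h = 2
    mad = sm.QuantReg(y, X).fit(q=0.5)    # h = 1: median (L1) regression
    print("OLS estimates:", ols.params)
    print("MAD estimates:", mad.params)

On draws like this one, the OLS estimates tend to be pulled toward the contaminated point while the MAD estimates are affected far less.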
Two possible objections to the choice of OLS over MAD estimation are
(1) the lack of knowledge of the statistical properties of MAD estimators, and
(2) the computational difficulty. Some progress has been made on the first front
as it was recently shown [14] that MAD estimators are unbiased as long as the
distribution of the disturbances is symmetric. It has been known for some time
that the MAD criterion produces maximum likelihood estimates when distur-
bances follow the double exponential distribution. A great deal of work remains
to be done in developing the sampling theory for this type of estimator. The com-
putational obstacle has been overcome to a large extent. Wagner [15] first showed
that the MAD estimation problem can be formulated as a linear program. With a
standard simplex algorithm, the computational task can still be considerable since
the number of constraints is equal to the number of observations (n). Several
researchers have investigated ways of reducing the required computations, in-
cluding algorithms that deal with the dual of the LP problem [13]. The best solu-
tion is a routine due to Barrodale and Roberts [4] [5] that employs a specially
modified primal simplex algorithm. With the use of the Barrodale and Roberts
routine, problems of moderate size can be solved at modest cost in terms of com-
puter time. (An indication of computer time requirements will be given later.)
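To make the linear-programming formulation concrete, here is a minimal sketch of the standard primal formulation (my illustration, not the specialized Barrodale and Roberts routine), assuming scipy is available: minimize the sum of u_i + v_i subject to Xb + u − v = Y with u, v ≥ 0, so that u_i + v_i equals |e_i| at the optimum.

    import numpy as np
    from scipy.optimize import linprog

    def mad_fit(X, y):
        # Primal LP for MAD regression; note the n equality constraints,
        # one per observation, as discussed above.
        n, p = X.shape
        c = np.concatenate([np.zeros(p), np.ones(2 * n)])    # cost on u and v only
        A_eq = np.hstack([X, np.eye(n), -np.eye(n)])         # X b + u - v = y
        bounds = [(None, None)] * p + [(0, None)] * (2 * n)  # b free; u, v >= 0
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds)
        return res.x[:p]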
By means of Monte Carlo sampling, earlier studies have provided some in-
formation regarding the behavior of MAD estimators under various conditions.
The work of Forsythe [8], although it did not deal with MAD estimators explicit-
ly, established a pattern for L_h estimators as h ranged from 1.25 to 2.0. Forsythe
found that smaller values of h performed better as the disturbances became more
contaminated (that is, as more of the disturbances were drawn from a normal
distribution with a large variance). In light of this finding, it is reasonable to sup-
pose that L_1 estimators would do well with very high contamination rates.
Ashar and Wallace [3] found that for a three-variable model (p = 3) with
autocorrelated normal disturbances and 20 observations, the efficiency of MAD
estimators relative to OLS estimators was about 80 percent.
The work of Blattberg and Sargent [6] is perhaps the most revealing. In their
(artificial) data, disturbances followed members of the class of stable Paretian
distributions which includes the normal and Cauchy as special cases. A minimum
dispersion linear unbiased estimator was derived for this class of distributions, in
which the measure of dispersion is a generalization of variance for the infinite-
variance distributions. It was found that over a wide range of distributions having
infinite variance, MAD estimators outperformed both the minimum dispersion
estimators and OLS estimators. In the normal case, MAD estimators still did not
perform badly; mean absolute deviations for MAD and OLS were .017 and .013,
respectively. Similarly, Kiountouzis [12] found that MAD estimators were more
efficient than OLS estimators in the presence of Cauchy and Laplace distribu-
tions. This work offers further confirmation of the general image of MAD
estimators as being robust in the presence of large disturbances, yet reasonably
efficient compared with OLS when the normality assumption is satisfied.
Brecht [7] investigated cases wherein regressors contained observational er-
rors. He found that in models with a mean observation bias, MAD estimators
outperformed OLS. With a median rather than mean bias, neither method
evidenced a clear advantage.
SAMPLING RESULTS
Although these earlier studies offer some interesting insights, they do not
give a clear picture of the range of conditions under which MAD estimation is
advantageous. The following Monte Carlo sampling results constitute a more
systematic investigation of the subject. Departures from the usual OLS assump-
tions will be couched in terms of a few points (outliers) with disturbances
taken from a normal distribution having zero mean and a given standard devia-
tion σ*. Disturbances associated with ordinary points follow a normal distribu-
tion with standard deviation σ, where σ < σ*. This of course violates the constant
variance (or homoscedasticity) condition. Such a framework should be somewhat
easier for most applied researchers to identify with than, say, stable Paretian
distributions with varying dispersion parameters.
The first situation to be investigated involves a model of the form

$$ Y_i = \beta_1 + \beta_2 X_{i2} + \beta_3 X_{i3} + \epsilon_i, \qquad (1) $$

in which

β1 = 4.0
β2 = 2.0
β3 = -1.0.

This will be referred to as the three-variable model. Each of the two non-constant
regressors, Xi2 and Xi3, took on integer values from -2 to 2 in all combinations,
giving a complete balanced design with n = 25.
For each replication of the experiment, 25 disturbances were generated by a
normal pseudo-random number generator with zero mean. The standard devia-
tion, σ, for ordinary disturbances was equal to 1. From the disturbances, values
of the dependent variable were calculated according to (1) and the parameters β1,
β2, and β3 were estimated by each of three methods: MAD, OLS, and generalized
least squares (GLS). The GLS estimates were based on certain knowledge of
the standard deviation of the outliers, σ*. The selection of the points with which
large disturbances are to be associated is critical, since that selection has a strong
influence on the estimates. If the outliers are associated with extreme values of
the regressors, the effect on the estimates will be greater than if they are associated
with middle values. To remove this source of variation, the variances of the
disturbances were permuted randomly after every 10 replications. Each sample
result was based on 1000 replications.
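A sketch of the data-generating step of one replication, under my reading of this design (Python assumed; the variable names are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    levels = np.arange(-2, 3)                 # integer levels -2, ..., 2
    X2, X3 = np.meshgrid(levels, levels)      # complete balanced design, n = 25
    X = np.column_stack([np.ones(25), X2.ravel(), X3.ravel()])
    beta = np.array([4.0, 2.0, -1.0])

    m = 3                                     # number of outliers in this configuration
    sd = np.concatenate([np.full(25 - m, 1.0), np.full(m, 10.0)])
    sd = rng.permutation(sd)                  # re-permuted after every 10 replications
    y = X @ beta + rng.normal(0.0, sd)        # point-specific disturbance scales

    # GLS with the variances known reduces to weighted least squares:
    w = 1.0 / sd**2
    b_gls = np.linalg.solve((X.T * w) @ X, (X.T * w) @ y)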
Denote the sample standard deviation of β̂j when estimated by OLS and
MAD as σ̂OLS(β̂j) and σ̂MAD(β̂j), respectively. These sample standard deviations are
calculated according to the usual formula,

$$ \hat{\sigma}(\hat{\beta}_j) = \left[ \sum_{k=1}^{N} (\hat{\beta}_{jk} - \bar{\beta}_j)^2 / (N - 1) \right]^{1/2}, $$

in which β̂jk is the kth sample estimate of βj, N is the number of sample estimates
(replications), and

$$ \bar{\beta}_j = \frac{1}{N} \sum_{k=1}^{N} \hat{\beta}_{jk}. $$

An obvious alternative would be to base the estimates on deviations from the true
population means, βj, rather than from the sample means. In this study, however,
the differences β̄j - βj were so slight that applying the alternative method of
measuring deviations always produced standard deviation estimates within 0.1
percent of σ̂(β̂j). Thus the choice of one method over the other had no effect on
the reported results.
For each of the p parameters and each estimation method, the sample stan-
dard deviation is calculated from 1000 parameter estimates β̂j. A composite stan-
dard deviation for Ŷ is calculated as

$$ \hat{\sigma}(\hat{Y}) = \left[ \frac{1}{n} \sum_{i=1}^{n} \sum_{k=1}^{N} (\hat{Y}_{ik} - \bar{Y}_i)^2 / (N - 1) \right]^{1/2}, $$

in which Ŷik is the kth sample estimate of Yi, and

$$ \bar{Y}_i = \frac{1}{N} \sum_{k=1}^{N} \hat{Y}_{ik}. $$

Thus the standard deviation of Ŷ for each estimation method is based on 1000n
(= 25,000 in this case) estimates of Yi. The relative efficiency of the MAD
estimator of βj relative to the OLS estimator is then given by

$$ \mathrm{eff}(\hat{\beta}_j) = \hat{\sigma}_{\mathrm{OLS}}(\hat{\beta}_j) / \hat{\sigma}_{\mathrm{MAD}}(\hat{\beta}_j), $$

and similarly for Ŷ. Since both estimation methods give unbiased estimators,
relative efficiency offers a good basis for comparing the methods. Values greater
than 1.0 indicate that MAD is outperforming OLS, at least in this relative
variance sense.
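In code, the efficiency computation is just a ratio of sample standard deviations across replications (a sketch; the two arrays are hypothetical placeholders for replicated estimates of a single parameter or fitted value):

    import numpy as np

    def relative_efficiency(est_ols, est_mad):
        # est_ols, est_mad: shape (N,) arrays of replicated estimates.
        s_ols = np.std(est_ols, ddof=1)   # sample SD about the sample mean, divisor N - 1
        s_mad = np.std(est_mad, ddof=1)
        return s_ols / s_mad              # values above 1.0 favor MAD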
Table 1 shows the (estimated) efficiency of MAD estimators relative to OLS
estimators as the number of outliers varies for the case with σ* = 10. As the
number of outliers becomes a large fraction of the number of observations, n, it
no longer makes sense to speak in terms of outliers. For example, in Table 1 the
situation with 23 outliers should be thought of as entailing 23 ordinary points
and two points with very small disturbances.

TABLE 1
Relative Efficiencies of MAD Estimators
for a Three-Variable Model with
25 Observations and σ* = 10

Number of
Outliers      β̂1      β̂2      β̂3      Ŷ

    0        0.81    0.82    0.81    0.82
    1        1.69    1.73    1.75    1.72
    2        2.26    2.21    2.36    2.27
    3        2.66    2.50    2.64    2.60
    4        2.64    2.57    2.86    2.69
    5        2.88    2.77    2.96    2.87
    7        2.89    2.92    2.92    2.91
   10        2.61    2.70    2.51    2.60
   13        2.10    2.03    1.95    2.02
   16        1.57    1.53    1.56    1.55
   19        1.20    1.19    1.08    1.15
   21        0.98    0.98    0.94    0.97
   22        0.92    0.95    0.95    0.94
   23        0.88    0.89    0.88    0.88
   24        0.85    0.84    0.83    0.84
   25        0.79    0.81    0.80    0.80

It is seen from Table 1 that MAD estimation is clearly superior to OLS as the
number of outliers ranges between one and nineteen. When the homoscedasticity
assumption is satisfied, the efficiency of MAD estimators is about 80 percent. As
soon as a single outlier is introduced, however, the advantage switches
dramatically away from OLS. With more than nineteen outliers (i.e., a few
inliers) MAD again loses its advantage. The technique is not able to make use
of a few especially good points to the same degree that it effectively ignores a few
bad points.
Table 2 gives actual values of sample standard deviations of Ŷ for the three
estimation methods. The standard deviations of Ŷ were chosen for this com-
parison since the relative efficiency of Ŷ tends to be representative of the relative
efficiencies of the β̂j. Figure 1 depicts graphically the data listed in Table 2. It is
seen that when the variances of the disturbances are known, GLS consistently
gives the best estimates. With a moderate number of outliers, however, MAD
estimates do not fare badly relative to GLS. Hence, in that preponderance of real-
world situations wherein variances of disturbances are not known, MAD estima-
tion looks to be a good choice.

TABLE 2
Sample Standard Deviations of Ŷ for Three
Estimation Methods in a Three-Variable Model
with 25 Observations and σ* = 10

Number of
Outliers      OLS     GLS     MAD

    0        0.35    0.35    0.43
    1        0.77    0.35    0.45
    2        1.07    0.36    0.47
    3        1.29    0.37    0.50
    4        1.42    0.38    0.53
    5        1.59    0.39    0.56
    7        1.85    0.42    0.64
   10        2.20    0.46    0.84
   13        2.49    0.54    1.23
   16        2.77    0.60    1.79
   19        3.03    0.91    2.64
   21        3.14    1.34    3.25
   22        3.28    1.70    3.49
   23        3.31    2.29    3.76
   24        3.33    2.87    3.97
   25        3.50    3.50    4.38

Tables 1 and 2 both correspond to a three-variable model with 25 observa-
tions and σ* = 10. Variations from this setup along at least three dimensions
should be of interest: (1) different standard deviations, σ*, associated with the
outliers; (2) different numbers of observations, n; and (3) different numbers of
regressors, p. Results of varying σ* are given in Table 3. As should be expected,
with a moderate number of outliers, the advantage of MAD over OLS estimators
increases as σ* increases. For example, with two outliers and σ* = 25, the relative
efficiency of MAD is over 500 percent. Even with σ* = 5, however, the advantage
of MAD estimation is clear.

FIGURE 1
Sample Standard Deviations of Ŷ for Three Estimation Methods
[Figure omitted: σ̂(Ŷ) plotted against the number of outliers (0 to 25) for the OLS, GLS, and MAD estimators]

TABLE 3
Relative Efficiencies of MAD Estimator, Ŷ,
for a Three-Variable Model with
25 Observations and Varied σ*

Number of
Outliers    σ* = 5    σ* = 10    σ* = 25

    0        0.79      0.82       0.81
    1        1.05      1.72       3.95
    2        1.28      2.27       5.14
    3        1.36      2.60       5.93
    4        1.48      2.69       6.56
    5        1.56      2.87       6.82
    7        1.66      2.91       6.89
   10        1.59      2.60       5.74
   13        1.36      2.02       2.69
   16        1.22      1.55       1.81
   19        1.03      1.15       1.21
   21        0.94      0.97       1.01
   22        0.90      0.94       0.96
   23        0.85      0.88       0.92
   24        0.83      0.84       0.84
   25        0.80      0.80       0.82

The next parameter to be varied was n, the number of observations. In
each case, the complete balanced design was retained. For n = 16, Xi2 and Xi3
took values of -3, -1, 1, and 3. For n = 49, these two regressors took all integer
values from -3 to 3. Table 4 gives relative efficiencies for up to seven outliers.
The table shows that in the presence of a small number of outliers, the relative
advantage of MAD estimation becomes greater as the number of data points
decreases. This result is intuitively reasonable; one or two outliers do not do such
serious damage to OLS estimation when a large number of good data points are
available.
In studying models with different numbers of regressors, the complete
balanced X matrix could not be retained without changing the number of obser-
vations, n. The course chosen was to keep n fixed at 25 and to assign values to the
regressors independently and randomly from a uniform distribution on the inter-
val (-2, 2). It is natural to question whether this change in design has any serious
effect on the relative efficiencies of the estimators. Table 5 compares efficiencies
for the three-variable model with a balanced design (as before) and with a ran-
dom design. Randomization of the design does not appear to have any important
influence on the efficiencies.
Table 6 gives results for models with two, three, four, and five independent
variables. True values of the model parameters were as follows:

β1 = 4
β2 = 2
β3 = -1
β4 = 4
β5 = -3.

TABLE 4
Relative Efficiencies of MAD Estimator, Ŷ,
for a Three-Variable Model with σ* = 10
and a Varied Number of Observations

Number of
Outliers    n = 16    n = 25    n = 49

    0        0.78      0.82      0.79
    1        1.97      1.72      1.45
    2        2.52      2.27      1.72
    3        2.74      2.60      1.99
    4        2.61      2.69      2.18
    5        2.67      2.87      2.48
    7        1.84      2.91      2.69

TABLE 5
Relative Efficiencies of MAD Estimator, Ŷ,
for a Three-Variable Model with 25 Observations
and σ* = 10, Comparing Balanced
and Random Designs

Number of    Random    Balanced
Outliers     Design     Design

    0         0.82       0.82
    1         1.67       1.72
    2         2.09       2.27
    3         2.54       2.60
    4         2.72       2.69
    5         2.75       2.87
    7         2.85       2.91
   10         2.65       2.60

TABLE 6
Relative Efficiencies of MAD Estimator, Ŷ,
for 25 Observations, σ* = 10, and
a Varied Number of Regressors

Number of
Outliers    p = 2    p = 3    p = 4    p = 5

    0        0.78     0.82     0.82     0.81
    1        1.74     1.67     1.70     1.70
    2        2.15     2.09     2.12     2.10
    3        2.41     2.54     2.45     2.45
    4        2.79     2.72     2.74     2.63
    5        2.95     2.75     2.77     2.69
    7        3.05     2.85     2.79     2.68
   10        2.98     2.65     2.37     2.11

Comparison of the four columns in the table reveals strikingly little variation aris-
ing from the number of independent variables in the model, especially in the
presence of a small number of outliers. This result demonstrates further the wide
range of circumstances under which MAD estimation can be useful.
All of the sampling results presented up to this point deal with sets of obser-
vations containing a mixture of good data points and outliers, but nothing
in between the two extremes. It is of interest to investigate a somewhat more
general situation in which variances of disturbances lie along a continuum rather
than taking only two specific values. For this purpose, standard deviations of
disturbances were drawn from a uniform distribution and data points were
generated as before, permuting the standard deviations randomly after every ten
trials. Table 7 shows relative efficiencies of MAD estimators for various ranges of
standard deviations. Although MAD estimation is still more efficient for large
ranges of u, it does not achieve the great advantage over OLS that was observed
in the cases involving clear-cut outliers. The MAD estimation technique is ap-
parently very good at identifying bad data points in black-and-white situations,
but less successful at distinguishing shades of gray.

TABLE 7
Relative Efficiencies of MAD Estimator, Ŷ,
for a Three-Variable Model with 25 Observations
and Standard Deviations Distributed Uniformly
Over Various Intervals

Range of
Standard Deviations    Relative Efficiency

1.0 -   5.0                 0.94
1.0 -  10.0                 1.00
1.0 -  25.0                 1.11
1.0 -  50.0                 1.14
1.0 - 100.0                 1.15
0.5 -   5.0                 1.21
0.1 -   1.0                 1.13
0.1 -   2.0                 1.11
0.1 -   5.0                 1.41
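
In terms of the earlier data-generating sketch, this continuum variant changes only the source of the disturbance scales; for example, for the 1.0 - 25.0 row (again my illustration, not the author's code):

    # Draw a continuum of scales instead of the two-valued sigma/sigma* mixture:
    sd = rng.uniform(1.0, 25.0, size=25)
    sd = rng.permutation(sd)              # permuted after every ten trials, as before
    y = X @ beta + rng.normal(0.0, sd)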

Inspection of Tables 1, 3, 4, 5, and 6 suggests an interesting phenomenon:
when the OLS assumptions are satisfied, the relative efficiency of MAD
estimators appears to remain nearly constant at about 80 percent regardless of the
values of n, p, and σ. Table 8 gives the results of a more detailed investigation of
this matter. The variations in the table are remarkably slight in view of the rather
broad range of circumstances under which the parameters were estimated. This
observation applies not only to Ŷ, but to each of the regression coefficients in
each model. Each of the values in Table 8 is based on at least 1000 replications. A
few of the values were estimated from as many as 9000 replications in order to
establish with high confidence that the efficiencies are not all equal.

TABLE 8
Relative Efficiencies of MAD Estimators
when OLS Assumptions are Satisfied

p    n      σ      β̂1     β̂2     β̂3     β̂4     β̂5      Ŷ

2    16    1.0    .804   .807                          .806
2    16   10.0    .820   .811                          .813
2    25    1.0    .789   .793                          .791
3    16    0.5    .791   .812   .790                   .797
3    16    1.0    .807   .791   .796                   .798
3    16   10.0    .827   .799   .822                   .815
3    25    0.5    .797   .798   .806                   .800
3    25    1.0    .807   .806   .804                   .806
3    25   10.0    .792   .808   .798                   .800
3    49    1.0    .791   .792   .795                   .792
3    49   10.0    .804   .802   .798                   .802
4    25    1.0    .806   .805   .799   .812            .805
5    25    1.0    .815   .803   .812   .812   .822     .810

Computations for this work were carried out on an IBM 370/168 computer.
Calculation of 1000 sets of MAD estimates for the three-variable model required
5.93 seconds of CPU time with n = 25 and 13.28 seconds with n = 49.

CONCLUSIONS

The results presented here have reinforced a general impression left by earlier
studies: MAD estimators are not terribly inefficient relative to OLS estimators
when the OLS assumptions are satisfied, but are dramatically more efficient in
many situations where large disturbances are present. It was found that regardless
of the number of regressors, the number of observations, the standard deviation
of the disturbances, or the configuration of the design matrix, when the OLS
assumptions (including normality) were satisfied, the efficiency of MAD
estimators relative to OLS estimators varied only a few percentage points from 80
percent. The lowest estimated efficiency for any MAD estimator was about 79
percent. If this observation can be generalized, it will serve to place an upper
bound on the loss in efficiency that may result from selecting MAD rather than
OLS estimation. The matter warrants further investigation.
Sampling results such as those reported here cannot be extrapolated to other
situations with certainty. For example, outliers may result from some mechanism
other than the one employed in this study, disturbances may follow different
distributions, and the form of the model may vary beyond the range of the cases
investigated. In such uncharted territory there is little basis for making quan-
titative estimates of relative efficiencies. Based on the available evidence-from
past studies as well as from this one-it is reasonable to suppose that the same
general pattern will prevail over a wide range of circumstances; that is, MAD
estimators will tend to be more efficient than OLS estimators in the presence of a
few large disturbances.
Other specific findings of the sampling study were as follows:
1. If at least one outlier is present, MAD estimation enjoys a significant advan-
tage in efficiency over OLS estimation. This statement holds even when more
than half of the data points are classified as outliers and over a wide range of
ratios of outlier variance to normal variance.
2. As the ratio of outlier variance to normal variance increases, the efficiency
of MAD estimators (relative to OLS) increases.
3. The relative efficiency of MAD estimators increases as the number of obser-
vations (n) decreases.
4. Ceteris paribus, the number of independent variables in a model has little in-
fluence on the relative efficiency of the MAD estimator of the dependent
variable.
5. The relative advantage of MAD estimation is not so great when variances of
disturbances lie along a continuum rather than taking only two values (cor-
responding to ordinary points and outliers).

REFERENCES

[1] Adichie, J. N. Estimates of Regression Parameters Based on Rank Tests. Annals of
Mathematical Statistics, Vol. 38 (1967), pp. 894-904.
[2] Andrews, D. F. A Robust Method for Multiple Linear Regression. Technometrics, Vol. 16
(1974), pp. 523-531.
[3] Ashar, V. G., and T. D. Wallace. A Sampling Study of Minimum Absolute Deviations
Estimators. Operations Research, Vol. 11 (1963), pp. 747-758.
[4] Barrodale, I., and F. D. K. Roberts. An Improved Algorithm for Discrete L1 Linear Approx-
imation. SIAM Journal on Numerical Analysis, Vol. 10 (1973), pp. 839-848.
[5] Barrodale, I., and F. D. K. Roberts. Solution of an Overdetermined System of Equations in
the L1 Norm. Communications of the ACM, Vol. 17 (1974), pp. 319-320.
[6] Blattberg, R., and T. Sargent. Regression with Non-Gaussian Stable Disturbances: Some
Sampling Results. Econometrica, Vol. 39 (1971), pp. 501-510.
[7] Brecht, H. David. Regression Methodology with Gross Observation Errors in the Ex-
planatory Variables. Decision Sciences, Vol. 7 (1976), pp. 57-65.
[8] Forsythe, Alan B. Robust Estimation of Straight Line Regression Coefficients by Minimizing
pth Power Deviations. Technometrics, Vol. 14 (1972), pp. 159-166.
[9] Hogg, Robert V. Adaptive Robust Procedures: A Partial Review and Some Suggestions for
Future Applications and Theory. Journal of the American Statistical Association, Vol. 69
(1974), pp. 909-923.
[10] Huber, Peter J. Robust Regression: Asymptotics, Conjectures, and Monte Carlo. Annals
of Statistics, Vol. 1 (1973), pp. 799-821.
[11] Jureckova, Jana. Nonparametric Estimate of Regression Coefficients. Annals of
Mathematical Statistics, Vol. 42 (1971), pp. 1328-1338.
[12] Kiountouzis, E. A. Linear Programming Techniques in Regression Analysis. Applied
Statistics, Vol. 22 (1973), pp. 69-73.
[13] Robers, P. D., and A. Ben-Israel. An Interval Programming Algorithm for Discrete Linear
L1 Approximation Problems. Journal of Approximation Theory, Vol. 2 (1969), pp. 323-336.
[14] Taylor, Lester D. Estimation by Minimizing the Sum of Absolute Errors. In Frontiers in
Econometrics, edited by Paul Zarembka. New York: Academic Press, 1974. Pp. 169-190.
[15] Wagner, Harvey M. Linear Programming Techniques for Regression Analysis. Journal of
the American Statistical Association, Vol. 54 (1959), pp. 206-212.
