
3.3 The Least Squares Method


As we have indicated in the preceding two sections of this chapter, when
redundant measurements exist, an adjustment is necessary in order to obtain a
unique solution to the problem at hand. Although approximate methods, both
graphical and computational, may be adequate for some limited cases, a more
general and systematic procedure is needed for application to all situations. The
least squares adjustment method is such a procedure. Assuming, for the time
being, that all observations are uncorrelated and of equal precision (correlation
and unequal precision will be taken up later), the least squares adjustment
method is based upon the following criterion:
The sum of the squares of the observational residuals must be a minimum, i.e.,

$\phi = v_1^2 + v_2^2 + \cdots + v_n^2 = \sum_{i=1}^{n} v_i^2 \rightarrow \text{minimum}$  (3-8)
Thus, in addition to the fact that the adjusted observations must satisfy the
model exactly, the corresponding residuals must satisfy the least squares
criterion of Eq. (3-8). To demonstrate the application of this criterion, let us
consider the simple example of determining a distance $x$ by direct
measurement. Since the model involves only one element (or variable), it
requires only one measurement to uniquely determine the distance, i.e.,
$n_0 = 1$. Suppose that we have two measurements, $l_1 = 15.12$ m and
$l_2 = 15.14$ m. Then $n = 2$, and according to Eq. (3-1) there is one
redundancy, because

$r = n - n_0 = 2 - 1 = 1$
The final value $\hat{x}$ of the distance can be obtained from the observations
as follows:

$\hat{x} = l_1 + v_1 = \hat{l}_1$
$\hat{x} = l_2 + v_2 = \hat{l}_2$  (3-9)
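Viewed through Eqs. (3-9), every trial value of $\hat{x}$ implies one pair of residuals. A minimal Python sketch of this relationship (the variable names are ours, chosen for illustration):

```python
# Observation equations (3-9): x_hat = l1 + v1 and x_hat = l2 + v2,
# so choosing a trial value of x_hat fixes both residuals.
l1, l2 = 15.12, 15.14          # the two distance measurements, in metres

def residuals(x_hat):
    """Residuals v1, v2 implied by a trial estimate x_hat."""
    return x_hat - l1, x_hat - l2

for x_hat in (15.12, 15.13, 15.135):
    v1, v2 = residuals(x_hat)
    print(f"x_hat = {x_hat:.3f}:  v1 = {v1:+.3f}, v2 = {v2:+.3f}")
```

These three trial values generate exactly the three residual pairs examined next.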
There are obviously many possible values for $v_1$ and $v_2$ such that these
relations are satisfied. For example, we could have $v_1 = 0$ and $v_2 = -0.02$,
or $v_1 = +0.01$ and $v_2 = -0.01$, or $v_1 = +0.015$ and $v_2 = -0.005$, all of
which would satisfy the model as expressed by the relations in Eq. (3-9). Of all
these possibilities, the least squares solution is the one for which
$\phi = v_1^2 + v_2^2$ is a minimum. For the three possibilities, the
corresponding sums of the squares of the residuals are

$\phi_1 = 0^2 + 0.02^2 = 4 \times 10^{-4}$ m²
$\phi_2 = 0.01^2 + 0.01^2 = 2 \times 10^{-4}$ m²
$\phi_3 = 0.015^2 + 0.005^2 = 2.5 \times 10^{-4}$ m²
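As a quick numerical check (a Python sketch; the brute-force scan is only for illustration and is not part of the adjustment method itself), we can evaluate $\phi$ for the three candidates and then scan all residual pairs consistent with the condition $v_1 - v_2 = 0.02$:

```python
# Eq. (3-8) evaluated for the three candidate residual pairs.
candidates = [(0.0, -0.02), (0.01, -0.01), (0.015, -0.005)]
for k, (v1, v2) in enumerate(candidates, start=1):
    print(f"phi_{k} = {v1**2 + v2**2:.1e} m^2")   # 4.0e-04, 2.0e-04, 2.5e-04

# Scan v1 in 0.0001 m steps; v2 follows from the condition
# l1 + v1 = l2 + v2, i.e. v2 = v1 - 0.02.
phis = []
for i in range(-500, 501):
    v1 = i * 1e-4
    v2 = v1 - 0.02
    phis.append((v1**2 + v2**2, v1))
phi_min, v1_min = min(phis)
print(f"minimum phi = {phi_min:.1e} m^2 at v1 = {v1_min:+.4f} m")  # at v1 = +0.0100
```

The scan confirms that $\phi_2$ is not merely the smallest of the three candidates but the minimum over all residual pairs satisfying the condition.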
It is clear that $\phi_2$ is the smallest of the three values, but the real
question is whether it is the very minimum value when all possible combinations
of corrections are considered. To answer this question, and also to demonstrate
the criterion of least squares geometrically, we refer to Fig. 3-3. The two
adjusted measurements $\hat{l}_1$, $\hat{l}_2$ are related to each other by

$\hat{l}_1 - \hat{l}_2 = 0$  (3-10)
which is easily obtained from Eq. (3-9) by subtracting the second line from the
first. Now if we let the abscissa of a two-dimensional Cartesian coordinate
system in Fig. 3-3 represent $\hat{l}_1$ and the ordinate $\hat{l}_2$, then
Eq. (3-10) would be depicted by a straight line inclined 45° to both axes, as
shown. The two numerical values of the observations, $l_1 = 15.12$ m and
$l_2 = 15.14$ m, define a point $A$, which falls above the line because
$l_1 < l_2$. The line representing Eq. (3-10) is called the condition line, since
it represents the condition that must exist between the two adjusted
observations $\hat{l}_1$ and $\hat{l}_2$. When this condition is satisfied, the
underlying model is also satisfied. Thus, an adjustment would be carried out if
point $A$ is replaced by another point which falls on the condition line. It is
obvious that there exist many possibilities for such a point on the line, three
of which are indicated by $A_1$, $A_2$, $A_3$, corresponding to the three
computed values $\phi_1$, $\phi_2$, $\phi_3$. It can be seen that point $A_1$
is obtained by moving from $A$ straight down, thus with $v_1 = 0$ and
$v_2 = -0.02$. Of all possibilities, the least squares principle selects the one
point, $A_2$, for which the distance $A A_2$ is the shortest possible (i.e., a
minimum). From simple geometry, the direction of $A A_2$ is perpendicular to
the condition line, thus forming a 45° triangle from which $v_1$ and $v_2$ are
equal in magnitude but opposite in sign. The line $A A_2$ is therefore normal
to the condition line, as shown in Fig. 3-3. This solution also satisfies the
intuitive property that the new estimates $\hat{l}_1$, $\hat{l}_2$ of the
observations must deviate as little as possible from the given observations.
The adjusted observations end up being equal, $\hat{l}_1 = \hat{l}_2 = 15.13$ m,
thus satisfying the condition expressing the model. In fact, the final estimate
of the distance is $\hat{x} = 15.13$ m, which satisfies the intuitive feeling
that the adjusted value should be the arithmetic mean of the two given
observations, i.e.,

$\hat{x} = \frac{1}{2}(l_1 + l_2) = \frac{1}{2}(15.12 + 15.14) = 15.13$ m
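The same conclusion follows analytically, using only the definitions above: substituting $v_1 = \hat{x} - l_1$ and $v_2 = \hat{x} - l_2$ into Eq. (3-8) gives $\phi(\hat{x}) = (\hat{x} - l_1)^2 + (\hat{x} - l_2)^2$; setting $d\phi/d\hat{x} = 2(\hat{x} - l_1) + 2(\hat{x} - l_2) = 0$ yields $\hat{x} = \frac{1}{2}(l_1 + l_2)$, the arithmetic mean.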


Indeed, this is the simplest case of the very important fact that whenever
measurements of a quantity are uncorrelated and of equal precision (weight), the
least squares estimate of the quantity is equal to the arithmetic mean of the
measurements (see the following section). When only $n_0$ observations are
obtained, the model is uniquely determined, as for example when measuring a
distance once. If one additional measurement is made, there is one redundancy
$(r = 1)$, and a corresponding equation must be formulated to alleviate the
resulting inconsistency. Thus, in the case of the two measurements of a distance
just discussed, Eq. (3-10) must be enforced in order to guarantee that the two
adjusted observations end up being equal, thus satisfying the model. Such an
equation is called a condition equation, or simply a condition, since it
reflects the condition that must be satisfied with regard to the given model.
As another example, consider the shape of a plane triangle. Any two interior
angles would uniquely determine its shape, i.e., $n_0 = 2$. If all three
interior angles are measured $(n = 3)$, then there is a redundancy of one,
$r = n - n_0 = 3 - 2 = 1$. For this redundancy, one condition equation needs to
be written to make the adjusted observed angles consistent. Such a condition
would reflect the geometric fact that the sum of the three interior angles in a
plane triangle must equal 180°. Thus, if the three measured angles are $l_1$,
$l_2$, $l_3$, then the condition is

$\hat{l}_1 + \hat{l}_2 + \hat{l}_3 = 180°$
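For uncorrelated angles of equal precision, the least squares solution distributes the misclosure equally among the three angles (the analogue of the arithmetic-mean result above: minimizing $v_1^2 + v_2^2 + v_3^2$ subject to a single sum condition gives equal corrections). A minimal Python sketch, with illustrative measured values of our own choosing:

```python
# Three measured interior angles of a plane triangle, in degrees.
# The values are illustrative; they misclose by 0.03 deg.
l = [62.71, 44.62, 72.64]                 # sum = 179.97

w = 180.0 - sum(l)                        # misclosure of the condition
v = [w / 3.0] * 3                         # equal precision -> equal corrections
l_hat = [li + vi for li, vi in zip(l, v)] # adjusted angles

print("corrections:", [f"{vi:+.3f}" for vi in v])  # all +0.010
print("adjusted sum:", f"{sum(l_hat):.2f}")        # 180.00, condition satisfied
```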
The condition equations discussed so far contain both observations and
constants. They are equal in number to the redundancy of the problem, $r$.
Thus, for any adjustment problem, there exist $r$ condition equations among the
observations. This leads to one technique of least squares called adjustment of
observations only.
A second least squares technique which is used frequently is called adjustment
of indirect observations. In this technique, the number of conditions is equal
to the total number of observations, $n$. Since in terms of the observations
only there should be $r$ conditions, in this technique the equations must
contain $n - r = n_0$ additional unknown variables. These additional unknowns
are called parameters. However, unlike an observation, which has a value at the
outset, a parameter is an unknown which has no a priori value. An example of
the conditions involved in the case of adjustment of indirect observations is
given in Eqs. (3-9). In these equations, $\hat{x}$ represents the least squares
estimate of the required distance, the one unknown parameter. $\hat{x}$ is not
known beforehand, but is a calculated result of the least squares solution.
Here, there are two condition equations $(n = 2)$, which is the sum of the
redundancy $(r = 1)$ and the number of parameters $(n_0 = 1)$. It can be seen
that if the parameter estimate $\hat{x}$ is algebraically eliminated from
Eqs. (3-9), there remains one equation in terms of the observations,
Eq. (3-10). In general, all techniques of least squares are equivalent in that
they yield identical results for the same problem. The reason for having
different techniques is that each class of problems is usually better handled
by one technique than by another.
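As an illustration of this equivalence on the distance example (a sketch in standard matrix notation with numpy; the matrix formulation here is conventional least squares, not this chapter's notation), both techniques yield the same adjusted observations:

```python
import numpy as np

l = np.array([15.12, 15.14])       # the two distance observations

# Adjustment of indirect observations: conditions l + v = A * x_hat,
# one parameter, n = 2 condition equations. The normal equations give
# x_hat = (A^T A)^{-1} A^T l, which is the mean for A = [1, 1]^T.
A = np.array([[1.0], [1.0]])
x_hat = np.linalg.solve(A.T @ A, A.T @ l)[0]
v_indirect = A[:, 0] * x_hat - l

# Adjustment of observations only: r = 1 condition among the adjusted
# observations, B (l + v) = 0 with B = [1, -1] (Eq. 3-10). The
# minimum-norm residuals are v = B^T (B B^T)^{-1} (-B l).
B = np.array([[1.0, -1.0]])
v_condition = (B.T @ np.linalg.solve(B @ B.T, -B @ l)).ravel()

print(f"x_hat = {x_hat:.2f}")                                     # 15.13
print(np.round(l + v_indirect, 3), np.round(l + v_condition, 3))  # both [15.13 15.13]
```

Eliminating $\hat{x}$ from the first formulation reproduces the single condition used in the second, which is exactly the relationship between Eqs. (3-9) and (3-10).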
