
Module 3

Sparsity Technique

3.1 Sparse matrices

A sparse matrix is a matrix in which most (or at least a significant number) of the elements are zero.
In the context of power system analysis, the matrices associated with the power flow solution are sparse.
For example, let us consider the $Y_{BUS}$ matrix. As we have already seen, the off-diagonal elements of
the $Y_{BUS}$ matrix signify the connectivity between the nodes. To be more precise, the element $(i,j)$ of
the $Y_{BUS}$ matrix is non-zero if there is a direct connection between node $i$ and node $j$, while it is zero if
there is no direct connection between these two nodes. Now, in many power systems, any
bus is generally connected directly to only 3-4 other buses. Therefore, in a 100-bus system (say), there would
be at most 4-5 non-zero terms (including the diagonal) in any row of the $Y_{BUS}$ matrix, the rest of the
elements being zero. Therefore, out of $(100 \times 100) = 10{,}000$ elements, only about 500 terms would
be non-zero and the other elements would be zero. Thus, in this case, the $Y_{BUS}$ matrix is
almost 95 percent sparse. For any larger system, the percentage sparsity of the associated $Y_{BUS}$
matrix would be even higher.
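This arithmetic is easy to verify in code. Below is a minimal Python sketch (the 6-bus branch list is hypothetical) that builds the non-zero pattern of a $Y_{BUS}$-like matrix and computes its percentage sparsity:

```python
import numpy as np

# Hypothetical branch list for a small 6-bus system: (from_bus, to_bus),
# 0-indexed. Each branch makes two symmetric off-diagonal entries non-zero.
branches = [(0, 1), (0, 4), (1, 2), (1, 5), (2, 3), (3, 4), (4, 5)]
n = 6

# Non-zero pattern of YBUS: the diagonal is always non-zero, and the
# off-diagonal (i, j) is non-zero only if buses i and j are directly connected.
pattern = np.eye(n, dtype=bool)
for i, j in branches:
    pattern[i, j] = pattern[j, i] = True

nonzeros = pattern.sum()
sparsity = 100.0 * (n * n - nonzeros) / (n * n)
print(f"{nonzeros} non-zeros out of {n * n}; sparsity = {sparsity:.1f}%")
```

For this toy 6-bus system the sparsity is modest; repeating the count for a 100-bus system with 3-4 branches per bus reproduces the roughly 95 percent figure quoted above.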
Because of the sparsity of the $Y_{BUS}$ matrix, the Jacobian matrix for the load flow solution is also
sparse. To see this, please consider equations (2.48)-(2.55). From these equations it can be seen
that the elements of the Jacobian matrix depend on the corresponding element $Y_{ij}$. Therefore, if the element
$Y_{ij}$ is zero, the corresponding elements of the Jacobian matrix are also zero. As most of the
elements $Y_{ij}$ of the $Y_{BUS}$ matrix are zero, it immediately follows that most of the elements of the
Jacobian matrix are also zero, thereby making the Jacobian matrix quite sparse as well.
Now, in each iteration of the NRLF technique (we are considering the polar form here), the
correction vector ($\Delta X$) is computed by inverting the Jacobian matrix and thereafter multiplying
the inverse of the Jacobian matrix with the mismatch vector ($\Delta M$) (please see equation (2.45)).
However, even though the Jacobian matrix is sparse, its inverse is a full matrix. Hence, computation
of the direct inverse of the sparse matrix involves a lot of computational burden. It would therefore
be much less intensive if equation (2.45) could be solved by exploiting the sparse nature of the Jacobian
matrix. Apart from this, storing all the elements of a highly sparse matrix also consumes memory
unnecessarily. Therefore, if only the non-zero elements are stored in an appropriate fashion, a lot of
memory can be freed. Of course, with the storage of only the non-zero elements, the complexity of
programming will increase. However, for any general purpose load flow program, which is expected
to handle any large power system, the increase in programming complexity is often a
small cost compared to the advantage of optimized memory utilization.
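As a minimal illustrative sketch of both ideas (the matrix below is a small stand-in, not an actual Jacobian), the following Python fragment uses SciPy to store only the non-zero elements in compressed sparse row form and to solve the system through a sparse factorization, without ever forming an explicit inverse:

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Stand-in 4x4 tridiagonal system: only the non-zero values and their
# (row, col) positions are supplied; the zero elements are never stored.
rows = [0, 0, 1, 1, 1, 2, 2, 2, 3, 3]
cols = [0, 1, 0, 1, 2, 1, 2, 3, 2, 3]
vals = [4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0]
A = sparse.csr_matrix((vals, (rows, cols)), shape=(4, 4))
b = np.array([1.0, 2.0, 3.0, 4.0])

# spsolve factorizes A while exploiting its sparsity; the (full) inverse
# of A is never computed.
x = spsolve(A, b)
print(x)
```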
Below we will discuss some schemes for solving a set of linear equations (note that equation (2.48)
is a set of linear equations) utilizing the sparse nature of the Jacobian matrix, as well as some schemes
for storing a sparse matrix. We will start with the Gaussian elimination method for solving a set of
linear equations.

3.2 Gaussian elimination technique

Let us consider a linear system of equations:

$$Ax = b \tag{3.1}$$

where $x$ and $b$ are both $(n \times 1)$ vectors and $A$ is an $(n \times n)$ co-efficient matrix. The most obvious
method for solving equation (3.1) is to invert the matrix $A$, that is, $x = A^{-1}b$. However, equation
(3.1) can also be solved indirectly by converting the matrix $A$ into an upper triangular form, with
appropriate changes reflected in the vector $b$, and then applying back substitution. To illustrate the basic
procedure, let us consider a 4th order system as shown in equations (3.2)-(3.5).

$$a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + a_{14}x_4 = b_1 \tag{3.2}$$

$$a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4 = b_2 \tag{3.3}$$

$$a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4 = b_3 \tag{3.4}$$

$$a_{41}x_1 + a_{42}x_2 + a_{43}x_3 + a_{44}x_4 = b_4 \tag{3.5}$$

Gaussian elimination proceeds in sequential steps, as described below:


Step 1:
a) Equation (3.2) is divided throughout by $a_{11}$.

$$x_1 + \frac{a_{12}}{a_{11}}x_2 + \frac{a_{13}}{a_{11}}x_3 + \frac{a_{14}}{a_{11}}x_4 = \frac{b_1}{a_{11}} \tag{3.6}$$

b) Multiply equation (3.6) by $a_{21}$, $a_{31}$, $a_{41}$ (one by one) and subtract the resulting expressions
from equations (3.3), (3.4) and (3.5) respectively to yield:

$$\left(a_{22} - \frac{a_{12}a_{21}}{a_{11}}\right)x_2 + \left(a_{23} - \frac{a_{13}a_{21}}{a_{11}}\right)x_3 + \left(a_{24} - \frac{a_{14}a_{21}}{a_{11}}\right)x_4 = b_2 - \frac{b_1 a_{21}}{a_{11}} \tag{3.7}$$

$$\left(a_{32} - \frac{a_{12}a_{31}}{a_{11}}\right)x_2 + \left(a_{33} - \frac{a_{13}a_{31}}{a_{11}}\right)x_3 + \left(a_{34} - \frac{a_{14}a_{31}}{a_{11}}\right)x_4 = b_3 - \frac{b_1 a_{31}}{a_{11}} \tag{3.8}$$

$$\left(a_{42} - \frac{a_{12}a_{41}}{a_{11}}\right)x_2 + \left(a_{43} - \frac{a_{13}a_{41}}{a_{11}}\right)x_3 + \left(a_{44} - \frac{a_{14}a_{41}}{a_{11}}\right)x_4 = b_4 - \frac{b_1 a_{41}}{a_{11}} \tag{3.9}$$

Equations (3.6) to (3.9) can be written more compactly as,

$$x_1 + \frac{a_{12}}{a_{11}}x_2 + \frac{a_{13}}{a_{11}}x_3 + \frac{a_{14}}{a_{11}}x_4 = \frac{b_1}{a_{11}} \tag{3.10}$$

$$a^{(1)}_{22}x_2 + a^{(1)}_{23}x_3 + a^{(1)}_{24}x_4 = b^{(1)}_2 \tag{3.11}$$

$$a^{(1)}_{32}x_2 + a^{(1)}_{33}x_3 + a^{(1)}_{34}x_4 = b^{(1)}_3 \tag{3.12}$$

$$a^{(1)}_{42}x_2 + a^{(1)}_{43}x_3 + a^{(1)}_{44}x_4 = b^{(1)}_4 \tag{3.13}$$

where, in equations (3.10)-(3.13),

$$a^{(1)}_{jk} = a_{jk} - \frac{a_{j1}a_{1k}}{a_{11}} \quad \text{for } j, k = 2, 3, 4 \tag{3.14}$$

with the analogous update $b^{(1)}_j = b_j - \frac{b_1 a_{j1}}{a_{11}}$ applied to the right-hand sides, as is evident from equations (3.7)-(3.9).
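The update rule (3.14) translates almost directly into code. The sketch below (Python/NumPy; the function name is ours) carries out step 1 on copies of $A$ and $b$:

```python
import numpy as np

def eliminate_first_column(A, b):
    """Step 1 of Gaussian elimination, on copies of A and b: normalize row 1
    by the pivot a11 (equation (3.6)), then apply the update rule (3.14),
    a_jk <- a_jk - a_j1 * a_1k / a11, and the matching update of b, so that
    column 1 becomes zero below the diagonal."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    pivot = A[0, 0]                  # assumed non-zero here
    A[0, :] /= pivot
    b[0] /= pivot
    for j in range(1, len(b)):       # rows 2 .. n
        factor = A[j, 0]             # this is a_j1
        A[j, :] -= factor * A[0, :]  # a_jk - a_j1 * (a_1k / a11)
        b[j] -= factor * b[0]        # b_j  - a_j1 * (b_1  / a11)
    return A, b
```

Applying the same operation to the trailing $3 \times 3$ block (and then to the trailing $2 \times 2$ block) reproduces steps 2 and 3 below.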

Step 2: In this step we will work with equations (3.11)-(3.13).

a) Equation (3.11) is divided throughout by $a^{(1)}_{22}$.

$$x_2 + \frac{a^{(1)}_{23}}{a^{(1)}_{22}}x_3 + \frac{a^{(1)}_{24}}{a^{(1)}_{22}}x_4 = \frac{b^{(1)}_2}{a^{(1)}_{22}} \tag{3.15}$$

b) Multiplying equation (3.15) by $a^{(1)}_{32}$ and $a^{(1)}_{42}$ (one by one) and subtracting the resulting
expressions from equations (3.12) and (3.13) respectively, one can obtain:

$$\left[a^{(1)}_{33} - \frac{a^{(1)}_{23}a^{(1)}_{32}}{a^{(1)}_{22}}\right]x_3 + \left[a^{(1)}_{34} - \frac{a^{(1)}_{24}a^{(1)}_{32}}{a^{(1)}_{22}}\right]x_4 = \left[b^{(1)}_3 - \frac{b^{(1)}_2}{a^{(1)}_{22}}\,a^{(1)}_{32}\right] \tag{3.16}$$

$$\left[a^{(1)}_{43} - \frac{a^{(1)}_{23}a^{(1)}_{42}}{a^{(1)}_{22}}\right]x_3 + \left[a^{(1)}_{44} - \frac{a^{(1)}_{24}a^{(1)}_{42}}{a^{(1)}_{22}}\right]x_4 = \left[b^{(1)}_4 - \frac{b^{(1)}_2}{a^{(1)}_{22}}\,a^{(1)}_{42}\right] \tag{3.17}$$

Similar to step 1, equations (3.15)-(3.17) are re-written as,

$$x_2 + \frac{a^{(1)}_{23}}{a^{(1)}_{22}}x_3 + \frac{a^{(1)}_{24}}{a^{(1)}_{22}}x_4 = \frac{b^{(1)}_2}{a^{(1)}_{22}} \tag{3.18}$$

$$a^{(2)}_{33}x_3 + a^{(2)}_{34}x_4 = b^{(2)}_3 \tag{3.19}$$

$$a^{(2)}_{43}x_3 + a^{(2)}_{44}x_4 = b^{(2)}_4 \tag{3.20}$$

where, in equations (3.19) and (3.20),

$$a^{(2)}_{jk} = a^{(1)}_{jk} - \frac{a^{(1)}_{j2}a^{(1)}_{2k}}{a^{(1)}_{22}} \quad \text{for } j, k = 3, 4 \tag{3.21}$$

Step 3: In this step we will work with equations (3.19) and (3.20).

a) Equation (3.19) is divided throughout by $a^{(2)}_{33}$.

$$x_3 + \frac{a^{(2)}_{34}}{a^{(2)}_{33}}x_4 = \frac{b^{(2)}_3}{a^{(2)}_{33}} \tag{3.22}$$

b) Multiplying equation (3.22) by $a^{(2)}_{43}$ and subtracting it from equation (3.20), one can obtain:

$$\left[a^{(2)}_{44} - \frac{a^{(2)}_{34}a^{(2)}_{43}}{a^{(2)}_{33}}\right]x_4 = \left[b^{(2)}_4 - \frac{b^{(2)}_3}{a^{(2)}_{33}}\,a^{(2)}_{43}\right] \tag{3.23}$$

Equation (3.23) contains only one unknown, $x_4$. Therefore, the value of $x_4$ can be calculated
from this equation. With the value of $x_4$ thus calculated, $x_3$ can be calculated from equation (3.22).
Going back in this manner, $x_2$ can be calculated from equation (3.18) (with the known values of $x_3$
and $x_4$) and lastly, the value of $x_1$ can be calculated from equation (3.10) (with the known values
of $x_2$, $x_3$ and $x_4$).
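Before expressing these steps as matrix operations, here is a compact sketch of the complete procedure (forward elimination followed by back-substitution) for a general $n$th order system, written in Python/NumPy; it assumes every pivot is non-zero, an assumption revisited at the end of this section:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination, mirroring equations (3.6)-(3.23):
    forward elimination to unit upper triangular form, then back-substitution.
    Assumes every pivot encountered is non-zero (no row interchanges)."""
    A = np.array(A, dtype=float)            # work on copies
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n):                      # forward elimination: steps 1, 2, 3
        pivot = A[k, k]
        A[k, k:] /= pivot                   # normalize the pivot row, cf. (3.6)
        b[k] /= pivot
        for j in range(k + 1, n):           # zero column k below the diagonal
            factor = A[j, k]
            A[j, k:] -= factor * A[k, k:]   # cf. update rule (3.14)
            b[j] -= factor * b[k]
    x = np.zeros(n)                         # back-substitution, last row first
    for k in range(n - 1, -1, -1):
        x[k] = b[k] - A[k, k + 1:] @ x[k + 1:]
    return x

# A hypothetical 4th-order system of the form (3.2)-(3.5):
A = [[4.0, 1, 2, 1], [1, 5, 1, 2], [2, 1, 6, 1], [1, 2, 1, 7]]
b = [9.0, 14.0, 16.0, 20.0]
print(gauss_solve(A, b))                    # agrees with np.linalg.solve(A, b)
```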
The steps described in equations (3.6)-(3.23) can easily be expressed in terms of standard matrix
operations. To see this, let us represent equations (3.2)-(3.5) in matrix notation as shown in equation
(3.24). In this equation, it is assumed that $a_{11} \neq 0$.

$$\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix}
\tag{3.24}$$

Starting with this matrix, the various steps for Gaussian elimination are as follows.

Step M1
On equation (3.24), the operation $R_1/a_{11}$ (where $R_1$ is the first row of the co-efficient matrix
of equation (3.24)) is carried out to obtain equation (3.6), and the resulting matrix equation is shown
in equation (3.25).

$$\begin{bmatrix}
1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b_1/a_{11} \\ b_2 \\ b_3 \\ b_4 \end{bmatrix}
\tag{3.25}$$

Step M2
On equation (3.25), the operations $(R_2 - R_1 a_{21})$, $(R_3 - R_1 a_{31})$ and $(R_4 - R_1 a_{41})$ are
carried out (where $R_i$ denotes the $i$th ($i = 1, 2, 3, 4$) row of the co-efficient matrix of equation (3.25))
to obtain equations (3.10)-(3.13), and the resulting matrix equation is shown in equation (3.26). In
this equation, it is assumed that $a^{(1)}_{22} \neq 0$.

$$\begin{bmatrix}
1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\
0 & a^{(1)}_{22} & a^{(1)}_{23} & a^{(1)}_{24} \\
0 & a^{(1)}_{32} & a^{(1)}_{33} & a^{(1)}_{34} \\
0 & a^{(1)}_{42} & a^{(1)}_{43} & a^{(1)}_{44}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2 \\ b^{(1)}_3 \\ b^{(1)}_4 \end{bmatrix}
\tag{3.26}$$

Step M3
On equation (3.26), the operation $R_2/a^{(1)}_{22}$ is carried out (corresponding to equation (3.15)) to
obtain the resulting matrix equation shown in equation (3.27).

$$\begin{bmatrix}
1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\
0 & 1 & a^{(1)}_{23}/a^{(1)}_{22} & a^{(1)}_{24}/a^{(1)}_{22} \\
0 & a^{(1)}_{32} & a^{(1)}_{33} & a^{(1)}_{34} \\
0 & a^{(1)}_{42} & a^{(1)}_{43} & a^{(1)}_{44}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2/a^{(1)}_{22} \\ b^{(1)}_3 \\ b^{(1)}_4 \end{bmatrix}
\tag{3.27}$$

Step M4
On equation (3.27), the operations $(R_3 - R_2 a^{(1)}_{32})$ and $(R_4 - R_2 a^{(1)}_{42})$ are carried out,
corresponding to equations (3.18)-(3.21), and the resulting matrix equation is shown in equation
(3.28). In this equation, it is assumed that $a^{(2)}_{33} \neq 0$.

$$\begin{bmatrix}
1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\
0 & 1 & a^{(1)}_{23}/a^{(1)}_{22} & a^{(1)}_{24}/a^{(1)}_{22} \\
0 & 0 & a^{(2)}_{33} & a^{(2)}_{34} \\
0 & 0 & a^{(2)}_{43} & a^{(2)}_{44}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2/a^{(1)}_{22} \\ b^{(2)}_3 \\ b^{(2)}_4 \end{bmatrix}
\tag{3.28}$$

Step M5
On equation (3.28), the operation $R_3/a^{(2)}_{33}$ is carried out to obtain the matrix equation shown
in equation (3.29).

$$\begin{bmatrix}
1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\
0 & 1 & a^{(1)}_{23}/a^{(1)}_{22} & a^{(1)}_{24}/a^{(1)}_{22} \\
0 & 0 & 1 & a^{(2)}_{34}/a^{(2)}_{33} \\
0 & 0 & a^{(2)}_{43} & a^{(2)}_{44}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2/a^{(1)}_{22} \\ b^{(2)}_3/a^{(2)}_{33} \\ b^{(2)}_4 \end{bmatrix}
\tag{3.29}$$

Step M6
Lastly, on equation (3.29), the operation $(R_4 - R_3 a^{(2)}_{43})$ is carried out to obtain the matrix
equation shown in equation (3.30).

$$\begin{bmatrix}
1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\
0 & 1 & a^{(1)}_{23}/a^{(1)}_{22} & a^{(1)}_{24}/a^{(1)}_{22} \\
0 & 0 & 1 & a^{(2)}_{34}/a^{(2)}_{33} \\
0 & 0 & 0 & a^{(3)}_{44}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}
=
\begin{bmatrix} b_1/a_{11} \\ b^{(1)}_2/a^{(1)}_{22} \\ b^{(2)}_3/a^{(2)}_{33} \\ b^{(3)}_4 \end{bmatrix}
\tag{3.30}$$

In equation (3.30), $a^{(3)}_{44} = a^{(2)}_{44} - \dfrac{a^{(2)}_{34}a^{(2)}_{43}}{a^{(2)}_{33}}$ and $b^{(3)}_4 = b^{(2)}_4 - \dfrac{b^{(2)}_3}{a^{(2)}_{33}}\,a^{(2)}_{43}$.

From this equation, the unknowns can be easily solved by back-substitution, starting from the last row of the final co-efficient
matrix in equation (3.30). Thus, Gaussian elimination enables us to solve for the unknown quantities
in a systematic manner without inverting the co-efficient matrix. Therefore, by adopting the same
procedure, the correction vector ($\Delta X$) can be computed from equation (2.48) without having to
invert the Jacobian matrix. When a large power system is analyzed, adopting Gaussian elimination
reduces the computational burden to a large extent (as compared to inversion of the Jacobian matrix).
In the above procedure, the variables $a_{11}$, $a^{(1)}_{22}$ and $a^{(2)}_{33}$ have been assumed to be non-zero. These
variables, by which the rows of the co-efficient matrix are divided, are called the pivot variables.
However, during the elimination process, the pivot variables are not guaranteed to be
non-zero. If any pivot variable turns out to be zero at any intermediate step, then the corresponding
row is interchanged with the next row so that the new pivot variable is non-zero and the elimination
process can continue, as sketched below.
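A sketch of this row-interchange rule, written as a helper that could be called at the start of each elimination step of the `gauss_solve` sketch given earlier (the tolerance is our assumption; testing a floating-point pivot against exact zero is unreliable):

```python
def pivot_if_needed(A, b, k, tol=1e-12):
    """If the pivot A[k, k] is (numerically) zero, interchange row k with the
    first row below it having a non-zero element in column k.
    A and b are NumPy arrays, as inside gauss_solve; they are modified in place."""
    if abs(A[k, k]) > tol:
        return                          # pivot already usable
    for j in range(k + 1, len(b)):
        if abs(A[j, k]) > tol:
            A[[k, j]] = A[[j, k]]       # swap the two rows of A ...
            b[[k, j]] = b[[j, k]]       # ... and the matching entries of b
            return
    raise ValueError(f"no usable pivot in column {k}: matrix is singular")
```

In practical programs the row with the largest magnitude element in the pivot column is usually chosen instead (partial pivoting), which also improves numerical accuracy.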
We will look into an example of the Gaussian elimination procedure in the next lecture.

