A linear system in three variables determines a collection of planes; the intersection point is the solution. In mathematics, a system of linear equations (or linear system) is a collection of linear equations involving the same set of variables. For example,

3x + 2y − z = 1
2x − 2y + 4z = −2
−x + (1/2)y − z = 0

is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by

x = 1, y = −2, z = −2,

since it makes all three equations valid.[1] In mathematics, the theory of linear systems is a branch of linear algebra, a subject which is fundamental to modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and such methods play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear
equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
One method for solving such a system is as follows. First, solve the top equation for x in terms of y:
This results in a single equation involving only the variable y. Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra).
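The substitution just described can be checked with a short script. This is a minimal sketch assuming the elementary example was the system 2x + 3y = 6 and 4x + 9y = 15, which is consistent with the stated values y = 1 and x = 3/2:

```python
from fractions import Fraction

# Assumed system:  2x + 3y = 6,  4x + 9y = 15.
# Solve the first equation for x:  x = (6 - 3y) / 2,
# substitute into the second:  4*(6 - 3y)/2 + 9y = 15  =>  12 + 3y = 15.
y = Fraction(15 - 12, 9 - 6)    # 3y = 3, so y = 1
x = (6 - 3 * y) / 2             # x = 3/2 exactly, thanks to Fraction arithmetic
print(x, y)                     # 3/2 1
```

Exact rational arithmetic avoids the float round-off that a plain `/` on integers would introduce.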
Here x1, x2, ..., xn are the unknowns; a11, a12, ..., amn are the coefficients of the system; and b1, b2, ..., bm are the constant terms.
Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.
This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their span, and the equations have a solution just when the right-hand
vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed.
where A is an m × n matrix, x is a column vector with n entries, and b is a column vector with m entries.
The number of vectors in a basis for the span is now expressed as the rank of the matrix.
The solution set for the equations x − y = −1 and 3x + y = 9 is the single point (2, 3). A solution of a linear system is an assignment of values to the variables x1, x2, ..., xn such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways:
1. The system has infinitely many solutions.
2. The system has a single unique solution.
3. The system has no solution.
The solution set for two equations in three variables is usually a line. In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns:
1. Usually, a system with fewer equations than unknowns has infinitely many solutions. Such a system is also known as an underdetermined system.
2. Usually, a system with the same number of equations and unknowns has a single unique solution.
3. Usually, a system with more equations than unknowns has no solution.
In the first case, the dimension of the solution set is usually equal to n − m, where n is the number of variables and m is the number of equations.
The following pictures illustrate this trichotomy in the case of two variables:
One Equation
Two Equations
Three Equations
The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. Keep in mind that the pictures above show only the most common case. It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). In general, a system of linear equations may behave differently than expected if the equations are linearly dependent, or if two or more of the equations are inconsistent.
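The three behaviors can be detected mechanically for two equations in two unknowns. The sketch below (the function name is illustrative, not from the source) classifies a system a1x + b1y = c1, a2x + b2y = c2 using the determinant of the coefficients and a proportionality check, and runs it on systems matching the cases discussed above:

```python
def classify(a1, b1, c1, a2, b2, c2):
    """Classify a1*x + b1*y = c1, a2*x + b2*y = c2 as having a unique solution,
    no solution, or infinitely many solutions."""
    det = a1 * b2 - a2 * b1
    if det != 0:
        return "unique"              # lines cross in exactly one point
    # det == 0: the coefficient rows are proportional (parallel or identical lines)
    if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
        return "infinite"            # same line written twice
    return "none"                    # parallel, distinct lines

print(classify(1, -1, -1, 2, -2, -2))   # infinite  (x - y = -1 scaled by two)
print(classify(1, -1, -1, 3, 1, 9))     # unique    (lines crossing at (2, 3))
print(classify(3, 2, 6, 3, 2, 12))      # none      (parallel lines)
```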
Properties
Independence
The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence.
For example, the equations 3x + 2y = 6 and 6x + 4y = 12 are not independent; they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations. For a more complicated example, the equations
x − 2y = −1, 3x + 5y = 8, and 4x + 3y = 7 are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.
Consistency
The equations 3x + 2y = 6 and 3x + 2y = 12 are inconsistent. The equations of a linear system are consistent if they possess a common solution, and inconsistent otherwise. When the equations are inconsistent, it is possible to derive a contradiction from the equations, such as the statement that 0 = 1. For example, the equations
are inconsistent. In attempting to find a solution, we tacitly assume that there is a solution; that is, we assume that the value of x in the first equation must be the same as the value of x in the second equation (the same is assumed to simultaneously be true for the value of y in both equations). Applying the substitution property (for 3x+2y) yields the equation 6 = 12, which is a false statement. This therefore contradicts our assumption that the system had a solution and we conclude that our assumption was false; that is, the system in fact has no solution. The graphs of these equations on the xy-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of the equations are consistent together. For example, the equations
x + y = 1, 2x + y = 1, and 3x + 2y = 3 are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Note that any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent.
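The 0 = 1 contradiction can be reproduced by row arithmetic on coefficient tuples. This is a sketch assuming the three equations were x + y = 1, 2x + y = 1, and 3x + 2y = 3, which is consistent with the sums quoted above:

```python
# Each equation is a tuple (coeff of x, coeff of y, constant):
# x + y = 1,  2x + y = 1,  3x + 2y = 3.
e1, e2, e3 = (1, 1, 1), (2, 1, 1), (3, 2, 3)

sum12 = tuple(a + b for a, b in zip(e1, e2))       # (3, 2, 2): 3x + 2y = 2
residue = tuple(a - b for a, b in zip(e3, sum12))  # (0, 0, 1): the statement 0 = 1
print(sum12, residue)                              # (3, 2, 2) (0, 0, 1)
```

The zero left-hand side with a nonzero constant is exactly the dependence-relation failure described in the paragraph above.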
Equivalence
Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice-versa. Equivalent systems convey precisely the same information about the values of the variables. In particular, two linear systems are equivalent if and only if they have the same solution set.
The solution set to this system can be described by the following equations:
Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, and the number of degrees of freedom equals the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z. A solution set with more free variables describes a plane, or a higher-dimensional set. Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:
Solving the first equation for x gives x = 5 + 2z − 3y, and plugging this into the second and third equation yields
Solving the first of these equations for y yields y = 2 + 3z, and plugging this into the second equation yields z = 2. We now have:
Substituting z = 2 into the second equation gives y = 8, and substituting z = 2 and y = 8 into the first equation yields x = −15. Therefore, the solution set is the single point (x, y, z) = (−15, 8, 2).
This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:
Type 1: Swap the positions of two rows.
Type 2: Multiply a row by a nonzero scalar.
Type 3: Add to one row a scalar multiple of another.
Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss-Jordan elimination. The following computation shows Gauss-Jordan elimination applied to the matrix above:
The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
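The row reduction just described is easy to script with exact rational arithmetic. This is a sketch of Gauss-Jordan elimination; the starting system x + 3y − 2z = 5, 3x + 5y + 6z = 7, 2x + 4y + 3z = 8 is an assumption reconstructed to match the unique solution x = −15, y = 8, z = 2:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix to reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols - 1):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue                                 # no pivot in this column
        m[r], m[piv] = m[piv], m[r]                  # type 1: swap rows
        m[r] = [x / m[r][c] for x in m[r]]           # type 2: scale to a leading 1
        for i in range(rows):
            if i != r and m[i][c] != 0:              # type 3: clear the column
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

rref = gauss_jordan([[1, 3, -2, 5], [3, 5, 6, 7], [2, 4, 3, 8]])
print([int(row[-1]) for row in rref])    # [-15, 8, 2]
```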
is given by
For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.
Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.
where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.
Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as
Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A.
Cramer's Rule
Determinants can be used to solve a linear system of equations using Cramer's Rule. Cramer's Rule for Two Equations in Two Variables
When solving a system of equations using Cramer's Rule, remember the following:
1. Three different determinants are used to find x and y. The determinants in the denominators are identical.
2. The elements of D, the determinant in the denominator, are the coefficients of the variables in the system: coefficients of x in the first column and coefficients of y in the second column.
3. D_x, the determinant in the numerator of x, is obtained by replacing the x-coefficients in D with the constants from the right sides of the equations.
4. D_y, the determinant in the numerator for y, is obtained by replacing the y-coefficients in D with the constants from the right sides of the equations.
Example. Use Cramer's Rule to solve the system:
5x − 4y = 2
6x − 5y = 1
Solution. We begin by setting up and evaluating the three determinants:
From Cramer's Rule, we have x = D_x/D = −6/−1 = 6 and y = D_y/D = −7/−1 = 7. The solution is (6, 7). Cramer's Rule does not apply if D = 0. When D = 0, the system is either inconsistent or dependent, and another method must be used to solve it.
Example. Solve the system:
3x + 6y = −1
2x + 4y = 3
Solution. We begin by finding D:
Since D = 0, Cramer's Rule does not apply. We will use elimination to solve the system.
3x + 6y = −1
2x + 4y = 3
Multiply both sides of equation 1 by 2 and both sides of equation 2 by −3 to eliminate x:
2(3x + 6y) = 2(−1), which simplifies to 6x + 12y = −2
−3(2x + 4y) = −3(3), which simplifies to −6x − 12y = −9
Adding the equations gives 0 = −11.
The false statement, 0 = −11, indicates that the system is inconsistent and has no solution. Cramer's Rule can be generalized to systems of linear equations with more than two variables. Suppose we are given a system whose coefficient matrix has determinant D. For each variable, let D_x, D_y, D_z, ... denote the determinant of the matrix obtained by replacing the column containing that variable's coefficients with the constants from the right sides of the equations. Then we have the following result: if a linear system of equations with variables x, y, z, . . . has D ≠ 0, it has a unique solution given by the formulas
x = D_x/D, y = D_y/D, z = D_z/D, . . .
Example. Use Cramer's Rule to solve the system:
4x − y + z = −5
2x + 2y + 3z = 10
5x − 2y + 6z = 1
Solution. We begin by setting up four determinants. D consists of the coefficients of x, y, and z from the three equations:
D_x is obtained by replacing the x-coefficients in the first column of D with the constants from the right sides of the equations.
D_y is obtained by replacing the y-coefficients in the second column of D with the constants from the right sides of the equations.
D_z is obtained by replacing the z-coefficients in the third column of D with the constants from the right sides of the equations.
D_y = 4(60 − 3) + 5(12 − 15) + 1(2 − 50) = 4(57) + 5(−3) + 1(−48) = 228 − 15 − 48 = 165
D_z = 4(2 − (−20)) + 1(2 − 50) − 5(−4 − 10) = 4(22) + 1(−48) − 5(−14) = 88 − 48 + 70 = 110
(The remaining two determinants evaluate to D = 55 and D_x = −55.) Substitute these four values into the formula from Cramer's Rule: x = −55/55 = −1, y = 165/55 = 3, z = 110/55 = 2, so the solution is (−1, 3, 2).
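The determinant arithmetic for this example can be verified with a few lines of code. This sketch uses cofactor expansion along the first row and the system as given (4x − y + z = −5, 2x + 2y + 3z = 10, 5x − 2y + 6z = 1):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def replace_col(m, col, vals):
    """Return a copy of m with the given column replaced by vals."""
    return [[vals[r] if c == col else m[r][c] for c in range(3)] for r in range(3)]

A = [[4, -1, 1], [2, 2, 3], [5, -2, 6]]     # coefficients of x, y, z
b = [-5, 10, 1]                             # right-hand-side constants

D = det3(A)
Dx, Dy, Dz = (det3(replace_col(A, j, b)) for j in range(3))
print(D, Dx, Dy, Dz)                # 55 -55 165 110
print(Dx // D, Dy // D, Dz // D)    # -1 3 2
```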
...you can switch the rows around to put the matrix into a nicer row arrangement, like this:
The second operation is row multiplication. For instance, given the following matrix:
This row multiplication is often indicated by using an arrow with multiplication listed on top of it, like this:
The "−1R1" indicates the actual operation. The "−1" says that we multiplied by negative one; the "R1" says that we were working with the first row. Note that the second and third rows were copied down, unchanged, into the second matrix. The multiplication only applied to the first row, so the entries for the other two rows were just carried along unchanged. You can multiply by anything you like. For instance, to get a leading 1 in the third row, you can multiply that row by a negative one-half:
Since you weren't doing anything with the first and second rows, those entries were just copied over unchanged into the new matrix. You can do more than one row multiplication within the same step, so you could have done the two above steps in just one step, like this:
It is a good idea to use some form of notation (such as the arrows and subscripts above) so you can keep track of your work. Matrices are very messy, especially if you're doing them by hand, and notes can make it easier to check your work later. It'll also impress your teacher.
The last row operation is row addition. Row addition is similar to the "addition" method for solving systems of linear equations. Suppose you have the following system of equations:
x + 3y = 1
−x + y = 3
You could start solving this system by adding down the columns to get 4y = 4:
You can do something similar with matrices. For instance, given the following matrix:
...you can "reduce" (get more leading zeroes in) the second row by adding the first row to it (the general goal with matrices at this stage being to get a "1" or "0's" and then a "1" at the beginning of each matrix row). When you were reducing the two-equation linear system by adding, you drew an "equals" bar across the bottom and added down. When you are using addition on a matrix, you'll need to grab some scratch paper, because you don't want to try to do the work inside the matrix. So add the two rows on your scratch paper:
This is your new second row; you will write it in place of the old second row. The result will look like this:
In this case, the "R1 + R2" on the arrow means "I added row one to row two, and this is the result I got". Since row one didn't actually change, and since we didn't do anything with row three, these rows get copied into the new matrix unchanged.
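Row addition is mechanical enough to script. A minimal sketch (the function name is illustrative) applying "R1 + R2" to the matrix for the system above, assuming its rows were (1, 3 | 1) and (−1, 1 | 3):

```python
def row_add(m, src, dst, k=1):
    """Type-3 operation: replace row dst with (k * row src + row dst).
    Returns a new matrix; the input is left unchanged."""
    out = [row[:] for row in m]
    out[dst] = [k * a + b for a, b in zip(m[src], m[dst])]
    return out

# "R1 + R2": add row one to row two (rows are 0-indexed here).
m = [[1, 3, 1], [-1, 1, 3]]
print(row_add(m, 0, 1))     # [[1, 3, 1], [0, 4, 4]]
```

Note that the source row is copied over unchanged, exactly as described in the text; only the destination row is adjusted.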
The "2R1 + R2" means "I multiplied row one by 2, and then added the result to row two". Note that row one itself is actually unchanged, so it is copied over to the new matrix. Only row two was actually adjusted. Copyright Elizabeth Stapel 1999-2009 All Rights Reserved
By the way, for doing operations like this, you'll probably want to use a lot of scratch paper, so you can be careful with your calculations. Or else you'll want to figure out how to have your graphing calculator do the messy parts. For instance, my calculator can do the "2R1 + R2" operation like this:
Check your manual for the instructions for your model of calculator.
Returning to that last matrix above, this is what the full calculation would look like: Hand-in work: Scratch-work:
Remember to do the scratch-work on scratch paper, not in the margins of your homework. You would hand in the following:
The above matrix calculations correspond to solving the linear system "x + 2y = 1, 2x + 3y = 5" to get the solution "x = 1, y = 1".
It's fairly simple to learn the three matrix row operations, but actually doing the operations can be frustrating. It is amazingly easy to make little arithmetic errors that mess up all your calculations. So do your work very clearly, plainly denoting the row operations you are doing. The neater you are, the more likely you'll be to get the right answer in the end, but even if you get a wrong answer, neat work is a lot easier to check. If you are required to do the work by hand but you have a graphing calculator, ask your instructor if it's okay to do the operations in the calculator (instead of on scratch paper), just writing down the steps you took and copying down the results, because this can really cut down on the errors. Matrices are quite powerful and versatile, but good golly! are they annoying to do by hand!
Gaussian Elimination
(2)
(3)
Here, the column vector in the variables is carried along for labeling the matrix rows. Now, perform elementary row operations to put the augmented matrix into the upper triangular form
(4)
Solve the equation of the k-th row for x_k, then substitute back into the equation of the (k−1)-st row to obtain a solution for x_(k−1), etc., according to the formula
(5)
In Mathematica, RowReduce performs a version of Gaussian elimination, with the equation A x = b being solved by
GaussianElimination[m_?MatrixQ, v_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{m, v}]]
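The back-substitution step from formula (5) can also be sketched outside Mathematica. The upper-triangular system below is made up for illustration; the loop works upward from the last row exactly as described:

```python
from fractions import Fraction

def back_substitute(U, c):
    """Solve U x = c for an upper-triangular U, working upward from the last row."""
    n = len(U)
    x = [Fraction(0)] * n
    for k in range(n - 1, -1, -1):
        # Subtract the already-known terms, then divide by the diagonal entry.
        s = sum(U[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (Fraction(c[k]) - s) / U[k][k]
    return x

# A small upper-triangular example (made up for illustration):
U = [[2, 1, -1], [0, 3, 2], [0, 0, 4]]
c = [3, 7, 8]
print(back_substitute(U, c))    # x3 = 2, then x2 = 1, then x1 = 2
```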
LU decomposition of a matrix is frequently used as part of a Gaussian elimination process for solving a matrix equation. A matrix that has undergone Gaussian elimination is said to be in echelon form. For example, consider the matrix equation
(6)
(7)
Switching the first and third rows (without switching the elements in the right-hand column vector) gives
(8)
Subtracting 9 times the first row from the third row gives
(9)
Subtracting 4 times the first row from the second row gives
(10)
Finally, adding
(11)
(12)
which can be solved immediately to give x3, back-substituting to obtain x2 (which actually follows trivially in this example), and then again back-substituting to find x1.
[1]
Write the given system as an augmented matrix. Examples of this step are below, or in specialized example "b", or in specialized example "c", or in our text Rolf (Pg 88). [system] ===> [ A | B ]
[2]
Convert [ A | B ] to REDUCED FORM: Pivot on matrix elements in positions 1-1, 2-2, 3-3, and so forth as far as is possible, in that order, with the objective of creating the biggest possible identity matrix I in the left portion of the augmented matrix. If one of these pivoting elements is zero, then first interchange its row with a lower row. For all problems you will see this semester, this step [2] is equivalent to steps 1 through 7 on Pg 95-96 of Rolf, resulting in REDUCED FORM. [ A | B ] ===> [ I | C ]
[3]
When [2] is done, re-write the final matrix [ I | C ] as equations. C will be a (vertical) list of variable values which solve the system, as in the example below
Note 1: It is possible to vary the GAUSS/JORDAN method and still arrive at correct solutions to problems. For example, the pivot elements in step [2] might be different from 1-1, 2-2, 3-3, etc. Also, it is possible to use row operations which are not strictly part of the pivoting process. Students are nevertheless encouraged to use the above steps [1][2][3].
Note 2: Professor M Farland names row operations just a bit differently from our text; we follow Prof M Farland's naming style.
Note 3: Compare the steps of G/J with those for finding matrix inverses.
Below is a system of equations which we will solve using G/J. Below is the 1st augmented matrix: pivot on the "1" encircled in red. Row operations for the 1st pivoting are named below. Next we pivot on the number "5" in the 2-2 position, encircled below.
Below is the result of performing P1 on the element in the 2-2 position. Next we must perform P2; the row operations of P2 are below.
The result of the 2nd pivoting is below. Now pivot on the "-7" encircled in red; using P1 we change "-7" to "1".
Below is the result of performing P1 on "-7" in the 3-3 position. Next we must perform P2.
The result of the third (and last) pivoting is below, with the 3x3 identity submatrix of G/J in blue (step [3]).
Re-writing the final matrix as equations gives the solution to the original system
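The two pivoting steps can be sketched as one function: P1 scales the pivot entry to 1 and P2 clears the rest of its column. The interpretation of P1/P2 and the 2x3 example system (2x + 4y = 6, x + 3y = 5) are assumptions for illustration, not taken from the course page:

```python
from fractions import Fraction

def pivot(m, r, c):
    """Pivot on entry (r, c): P1 scales row r so the entry becomes 1,
    then P2 clears the rest of column c with row additions."""
    m = [[Fraction(x) for x in row] for row in m]
    m[r] = [x / m[r][c] for x in m[r]]               # P1
    for i in range(len(m)):
        if i != r:
            f = m[i][c]                              # P2: subtract f * (pivot row)
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
    return m

# Pivot on positions 1-1 and then 2-2 of the augmented matrix [A | B]:
aug = pivot([[2, 4, 6], [1, 3, 5]], 0, 0)
aug = pivot(aug, 1, 1)
print([[int(x) for x in row] for row in aug])    # [[1, 0, -1], [0, 1, 2]]
```

Re-writing the final matrix [ I | C ] as equations gives x = −1, y = 2 for this made-up system.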
We use the in the upper-left as a pivot and scale the top row by
Next, we clear out the first column with shears. Note that the shear value is now just the negative of the entry in the first column.
So far it's almost the same as before, except for normalizing the pivot. But now when we choose the entry in the second column as the pivot we scale its row by
and use shears to clear the column above and below this pivot
Now the only choice for a pivot in the third column is the remaining nonzero entry, and we clear the column
This matrix is in reduced row echelon form, and I say that it is unique. Indeed, we are not allowed to alter the basis of the input space (since that would involve elementary column operations), so we can view this as a process of creating a basis for the output space in terms of the given basis of the input space. What we do is walk down the input basis, vector by vector. At each step, we ask if the image of this basis vector is linearly independent of the images of those that came before. If so, this image is a new basis vector. If not, then we can write the image (uniquely) in terms of the output basis vectors we've already written down. In our example above, each of the first three input basis vectors gives a new output basis vector, and the image of the last input basis vector can be written as the column vector in the last column. The only possibility of nonuniqueness is if we run out of input basis vectors before spanning the output space. But in that case the last rows of the matrix will consist of all zeroes anyway, and so it doesn't matter what output basis vectors we choose for those rows! And so the reduced row echelon form of the matrix is uniquely determined by the input basis we started with.
September 3, 2009 - Posted by John Armstrong | Algebra, Linear Algebra
Comments
1. Do we get the same matrix in reduced row echelon form for A^T (that is, A transposed)? Will it have the same number of non-zero rows? (Kuldeep, September 7, 2009)
2. Well, you obviously can't get the same matrix, because the reduced row echelon form must be the same shape as the original matrix, and A^T generally has a different shape from A. So could the reduced row echelon form of A^T be the transpose of the reduced row echelon form of A? Clearly this is impossible as well: looking at the example above, the transpose of the reduced row echelon form is most definitely not the reduced row echelon form of anything. Now, what you could do is define something like the reduced column echelon form, replacing elementary row operations everywhere with elementary column operations. Then it would be the case that the reduced row echelon form of A is the transpose of the reduced column echelon form of A^T. (John Armstrong, September 7, 2009)
3. Hi, I am looking at the construction of a holomorphic atlas on a Grassmannian of a complex vector space. Essentially this involves associating a unique k x (n-k) matrix with a k x n matrix of rank k. Books seem to assume that the k x n matrix can always be transformed to reduced row echelon form (by Gauss-Jordan elimination?) by premultiplication by a non-singular k x k matrix (presumably corresponding to elementary row operations). Is this always true, and is the transforming matrix always non-singular? (Necessary if the result represents the same k-plane.) If the k x n matrix is of rank k, does this ensure that the transformed matrix starts with the k x k identity matrix? There are perfectly good proofs that you can transform the k x n matrix in this way to have a k x k minor equal to the identity, but to change this to the RRE form would involve column operations, which would correspond to a change of basis in the initial space. (Noel Robinson, November 3, 2009)
4. Yes, that's exactly the point. The elementary row operations correspond to elementary matrices, which are nonsingular (unless you try to scale a row by zero). The result may not start with the identity matrix unless you also allow elementary column operations. For instance, a matrix can be in reduced row echelon form and have rank k, but not start with the k x k identity matrix. (John Armstrong, November 3, 2009)
5. Thank you for the response and for confirming my suspicion that a change of basis would be required. Actually, I do not think that this change of basis is necessary to construct a chart on a Grassmannian, as you can just pick out the k x (n-k) minor to determine the coordinate without needing the representative to be in the form (I|A). (Noel Robinson, November 3, 2009)
Matrix Multiplication by Jin Tao, Mccoy Jen, and Erica Kim Section 1: Reading a Matrix
Given a matrix A that has m rows and n columns, the entry in the i-th row and j-th column of A is a_ij and is called the (i, j) entry of A.
Section 2: Matrix addition and scalar multiplication Two matrices can only be added if they are of equal size, meaning both matrix A and matrix B have m rows and n columns. Example:
Addition and Scalar Multiplication Theorems: Let A, B, and C be matrices of the same size, and let r and s be scalars. Then:
1. A + B = B + A
2. (A + B) + C = A + (B + C)
3. A + 0 = A
4. r(A + B) = rA + rB
5. (r + s)A = rA + sA
6. r(sA) = (rs)A
Section 3: Matrix multiplication Definition: If A is an m x n matrix, and if B is an n x p matrix with columns b1, . . . , bp, then the product AB is the m x p matrix whose columns are Ab1, . . . , Abp, that is, AB = A[b1 b2 . . . bp] = [Ab1 Ab2 . . . Abp]
Row-Column Rule for Computing AB: If the product AB is defined, then the entry in row i and column j of AB is the sum of the products of corresponding entries from row i of A and column j of B. If (AB)ij denotes the (i,j)-entry in AB, and if A is an m x n matrix, then:
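The row-column rule translates directly into nested loops. A minimal sketch, with a small example that also previews the warning below about AB and BA:

```python
def matmul(A, B):
    """Row-column rule: (AB)[i][j] = sum over k of A[i][k] * B[k][j]."""
    assert len(A[0]) == len(B), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))    # [[2, 1], [4, 3]]
print(matmul(B, A))    # [[3, 4], [1, 2]]  -- AB != BA in general
```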
Matrix Multiplication Theorems: If A is a matrix with m rows and n columns, and B and C are matrices whose sizes make the indicated sums and products defined, then:
1. A(BC) = (AB)C
2. A(B + C) = AB + AC
3. (B + C)A = BA + CA
4. r(AB) = (rA)B = A(rB)
5. I_m A = A = A I_n
IMPORTANT: For a matrix A to be multiplied with matrix B, the number of columns in A MUST be equal to the number of rows in matrix B. This means that:
1. AB may not always be equal to BA:
2. Cancellation laws are not true for matrix multiplication: if AB = AC, then it is not always true that B = C. 3. If the product AB is a zero matrix, it is not always true that A = 0 or B = 0. Section 4: Matrix Powers If A is a square matrix with dimensions n x n, then A^k is the product of k copies of A. If k = 0, then A^0 is the n x n identity matrix. Section 5: Transpose of Matrix If A is an m x n matrix, the transpose of A is an n x m matrix denoted by A^T whose columns are formed by the rows of A. Example:
Transpose Theorems:
1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (rA)^T = r(A^T)
4. (AB)^T = B^T A^T
Section 6: Partitioned Matrices One of the key things in linear algebra is the ability to split up a bigger matrix into smaller subsections, which is also known as partitioning a matrix. For instance, we have a matrix A:
It can be partitioned into subsections at the user's discretion. Here is a partitioned matrix:
Section 7: Addition and Scalar Multiplication of Partitioned Matrices Addition works the same way with partitioned matrices.
As long as the matrices are partitioned in the same manner and have the same size, the laws of addition for matrices still hold true.
==>
Scalar multiplication on a partitioned matrix works the same way as multiplication of a scalar on a regular matrix.
Section 8: Multiplication of Partitioned Matrices The purpose of partitioning a matrix for multiplication is to split a bigger matrix into smaller chunks, which makes it easier to compute. For instance, take this example:
or
becomes:
and
The multiplication still works the same way, just within the partitions themselves:
==>
The key is that when you substitute the numbers back in, matrices are being substituted back in, and not just single numbers.
And
It is from these components that you multiply out and get other matrices; then you add up all the individual blocks to get the full answer.
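The blockwise recipe can be checked numerically. This sketch partitions two made-up 4x4 matrices into 2x2 blocks, multiplies block-by-block, and confirms the result equals the ordinary product:

```python
def matmul(A, B):
    """Plain row-column product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    """Entrywise sum of two same-size matrices."""
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def block(M, r0, r1, c0, c1):
    """Extract the submatrix with rows r0:r1 and columns c0:c1."""
    return [row[c0:c1] for row in M[r0:r1]]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [2, 0, 0, 1]]

# Partition both into 2x2 blocks:
A11, A12 = block(A, 0, 2, 0, 2), block(A, 0, 2, 2, 4)
A21, A22 = block(A, 2, 4, 0, 2), block(A, 2, 4, 2, 4)
B11, B12 = block(B, 0, 2, 0, 2), block(B, 0, 2, 2, 4)
B21, B22 = block(B, 2, 4, 0, 2), block(B, 2, 4, 2, 4)

# Each block of the product is a sum of two block products, e.g.
# (AB)11 = A11*B11 + A12*B21, just as in the scalar row-column rule.
top = [r1 + r2 for r1, r2 in zip(madd(matmul(A11, B11), matmul(A12, B21)),
                                 madd(matmul(A11, B12), matmul(A12, B22)))]
bot = [r1 + r2 for r1, r2 in zip(madd(matmul(A21, B11), matmul(A22, B21)),
                                 madd(matmul(A21, B12), matmul(A22, B22)))]

print(top + bot == matmul(A, B))    # True: blockwise result matches direct product
```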