OPTIONAL

Gaussian elimination uses one elegant trick, the EROs (elementary row
operations), to combine two equations into a simpler one that is also true
(consistent with the first two). If you ever used the technique of
"eliminating variables" to solve systems of linear equations, it turns out
the two techniques are related.

Example: Flashback to High School...."Eliminate the Variable"

Here's a system of two equations in two unknowns, x and y:

1. ax + by = c
2. dx + ey = f

Let's eliminate x from 2. In 1, we subtract by from both sides and divide
by a to get

3. x = (c - by)/a

Now we substitute this expression for x into 2:

   d(c - by)/a + ey = f,   which is the same as

   (cd - bdy)/a + eay/a = f,   or

   cd - bdy + eay = af,   so

4. y(ea - bd) = af - cd,   which, dividing both sides by a, is

5**. (e - bd/a) y = f - cd/a        (** means 'remember')

Ignoring 5. for now, 4. leads immediately to

6**. y = (af - cd)/(ea - bd)

Now we can evaluate y, since we *know* a, b, c, d, e, f. We do that, get a
numeric value for y, and substitute it back into 3, which gives us a number
for x since we know a, b, c. Values for x and y found, we're done. Seem
familiar?

----------------
Back to the Present....Gaussian Elimination

First, Forward Elimination.

Our problem in matrix form looks like

   [ a  b ] [ x ]   [ c ]
   [ d  e ] [ y ] = [ f ],      or      A x = v

GE works with the rows of the coefficient matrix A, and it mirrors each
operation on the row's constant element, thus treating coefficients and
constant as one big "row". So our problem looks like this, with each row
representing an equation (coefficients and constant) in the obvious way:

7. a  b  c
8. d  e  f

Elementary Row Ops (EROs) are:

* multiply a row by a constant
* add two rows
* swap two rows (can be implemented with the above two EROs).

Each ERO gives us a new row representing an equation that is consistent
with the equations in the original system. Our goal is to use EROs to set a
coefficient in one of the rows to 0, producing an equation in which the
variable associated with that coefficient does not appear (it's multiplied
by 0...it's eliminated).

ERO 1: Multiply 7. by -d/a; the new row is

9. -d  -bd/a  -cd/a

ERO 2: Add 8. and 9.; the new row is

10. 0  e - bd/a  f - cd/a

Reinterpreted as an equation, row 10 is

11**. 0x + (e - bd/a)y = f - cd/a,   which is the same as 5** !

Second, the Back Substitution phase. It starts with finding a value for y:
we want to divide the right-hand side of 11. by the coefficient of y on the
left-hand side. It's easy to see from 11 (although our algorithm does not
compute this intermediate step, since both the coefficient and the
right-hand side are by now simply numbers in the last row of the
upper-triangular matrix and in the right-hand-side vector, respectively)
that

   ((ea - bd)/a) y = (af - cd)/a,   so

12**. y = (af - cd)/(ea - bd)

Compare 12. to 6. This division is what the back-substitution phase of
Gaussian elimination computes first; it then uses the value of y to compute
x (and continues if there are more than 2 variables, of course).

Moral: Producing the upper-triangular matrix in Gaussian elimination is
preparatory work for eliminating the variables in turn, and GE is just a
version of the technique we saw back in high school.
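
To make the two phases concrete, here is a minimal Python sketch of forward
elimination followed by back substitution on an augmented matrix [A | v].
The function name gauss_solve and the example numbers are made up for
illustration (they are not from the notes above), and the sketch does no
pivoting, so it assumes every pivot it divides by is nonzero.

# A minimal sketch of Gaussian elimination: forward elimination to reach an
# upper-triangular system, then back substitution.  No pivoting or zero-pivot
# handling; assumes each pivot aug[k][k] is nonzero.

def gauss_solve(aug):
    """Solve A x = v, where aug is the n x (n+1) augmented matrix [A | v]."""
    n = len(aug)

    # Forward elimination: zero out everything below each pivot, mirroring
    # each row operation on the constant column as well.
    for k in range(n):
        for i in range(k + 1, n):
            factor = -aug[i][k] / aug[k][k]        # the "-d/a" multiplier
            for j in range(k, n + 1):
                aug[i][j] += factor * aug[k][j]    # add the scaled pivot row to row i

    # Back substitution: the last row has a single unknown; solve it, then
    # work upward, substituting the values already found.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        known = sum(aug[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (aug[i][n] - known) / aug[i][i]     # e.g. y = (f - cd/a) / (e - bd/a)
    return x

# The 2x2 example with made-up numbers a,b,c = 1,2,5 and d,e,f = 3,4,6:
#   1x + 2y = 5
#   3x + 4y = 6
print(gauss_solve([[1.0, 2.0, 5.0],
                   [3.0, 4.0, 6.0]]))   # -> [-4.0, 4.5]

The first division the back-substitution loop performs is exactly the
division in 12., just as described above.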