Matrices

Matrices are usually introduced as rectangular arrays of numbers, along with what looks like a sensible notion of addition, and a somewhat peculiar notion of multiplication.

Individual matrices are often denoted by upper case letters, and have an associated size. A matrix A with m rows and n columns is said to be m-by-n, where m and n are positive integers. A 1-by-1 matrix is defined, and acts in some ways like a single number. 1-by-n and n-by-1 matrices are often used to represent row and column vectors respectively. Vectors, even though they can be thought of as matrices, are often denoted by lower case letters, since there is often an important semantic distinction between objects represented by vectors and objects represented by full matrices.

The individual entries of a matrix are generally numbers, or scalars in mathematical parlance. They are most often referred to by subscripted lower case letters, for example, aij refers to the element of matrix A at row i, column j. Other notations such as Aij and A[i,j] are also sometimes used. The "numbers" may be integer, rational, real, or complex values (and are sometimes more exotic objects). In this course, we generally consider matrices to contain either real or complex values.

Examples of Matrices

[ 7   3   2 ]      [-1.23   2.71   6.43  8.34 ]      [ 2 + 3j ]
[ 1  17  16 ]      [ 2.22   3.14  -2.71  1.41 ]      [ 8 - 7j ] 
[ 4  23  13 ]      [ 7.66  -1.77  -1.49  3.27 ]      [-4 + 5j ]

Addition is defined between matrices of the same size. Specifically, the addition of two m-by-n matrices produces a third m-by-n matrix whose elements are the sum of the numbers at corresponding locations in the addend matrices. That is, C = A + B is defined by

cij = aij + bij.

This is sometimes referred to as "pointwise" addition or "addition by components". Because it is defined by addition of components, matrix addition is commutative, A + B = B + A, and associative, (A + B) + C = A + (B + C), just like ordinary addition of numbers.

Example of Matrix Addition

[ 3   4   5 ]     [ 2  -1  -1 ]     [ 5   3   4 ]
[-1   7  -2 ]  +  [ 7   0  -4 ]  =  [ 6   7  -6 ]
[ 5   0  -3 ]     [ 4   8   5 ]     [ 9   8   2 ]
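
If you want to experiment, here is a minimal sketch of the same computation in Python, assuming the NumPy library is available (the array values are just the example above):

    import numpy as np

    A = np.array([[ 3,  4,  5],
                  [-1,  7, -2],
                  [ 5,  0, -3]])
    B = np.array([[ 2, -1, -1],
                  [ 7,  0, -4],
                  [ 4,  8,  5]])

    C = A + B                            # pointwise addition: cij = aij + bij
    print(C)
    print(np.array_equal(A + B, B + A))  # True, since matrix addition is commutative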

Pointwise multiplication can be defined similarly, but this turns out not to be a very useful concept. Multiplication of a matrix by a scalar is a more frequently used operation, and is achieved by multiplying every matrix element by the scalar. That is, C = kA, where C and A are matrices and k is a scalar, is defined by

cij = k * aij
For example:
         [ 1.0    2.0 ]     [ 3.14   6.28 ]
3.14  *  [ 3.0   -1.0 ]  =  [ 9.42  -3.14 ]
         [-3.0   -2.0 ]     [-9.42  -6.28 ]
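
The same scalar multiplication as a quick sketch in Python (again assuming NumPy; the code is only an illustration, not part of the definition):

    import numpy as np

    A = np.array([[ 1.0,  2.0],
                  [ 3.0, -1.0],
                  [-3.0, -2.0]])
    C = 3.14 * A   # every element of A is multiplied by the scalar 3.14
    print(C)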

Multiplication of matrices by matrices is carefully defined so that matrix multiplication can be used to represent systems of linear equations. It so happens that once this is done, matrix multiplication can be used to represent other relationships as well. To provide some intuition behind the definition, recall that linear equations take the general form

a1x1 + a2x2 + ... + anxn = c.

Note that the left side can be viewed as the dot product between two n-component vectors a and x. Matrix multiplication is defined so that element (i,j) of the product AB is the dot product of the i-th row of A with the j-th column of B. Note that this implies that the number of columns of A must equal the number of rows of B. Thus if A is an m-by-p matrix, and B is a p-by-n matrix, then the product C = AB is an m-by-n matrix defined by

cij = ∑k aik bkj,  where k runs from 1 to p.

Examples of Matrix Multiplication

[ 1   2   3 ]     [ 1   1   1 ]     [-2   3   8 ]
[ 3   2   1 ]  *  [ 0   1   2 ]  =  [ 2   5   8 ]
[ 1   0   1 ]     [-1   0   1 ]     [ 0   1   2 ]
                  [ 1 ]
[ 1   2   3 ]  *  [ 2 ]  =  [ 14 ]
                  [ 3 ]
[ 1 ]                              [ 1   2   3   4   5 ]
[ 2 ]                              [ 2   4   6   8  10 ]
[ 3 ]  *  [ 1   2   3   4   5]  =  [ 3   6   9  12  15 ]
[ 4 ]                              [ 4   8  12  16  20 ]
[ 5 ]                              [ 5  10  15  20  25 ]

The second and third examples represent what are sometimes called, respectively, the inner and outer products of vectors (in this case, with themselves). Calling the second example an inner product is a slight misnomer, as the result is a 1-by-1 matrix rather than a scalar, which is subtly different (see the discussion below). In the third example, the rows (and columns) are multiples of each other. This reflects the fact that we really did not start with much information; even though we produced a big matrix, it is, in a sense that can be made precise, redundant.

With this definition of matrix multiplication, the form AX = C (often written Ax = c), where A is an m-by-n matrix of coefficients, X is an n-by-1 matrix representing a column vector of unknowns, and C is an m-by-1 matrix representing a column vector of constant terms, is defined and generates a system of m equations in n unknowns when the multiplication is carried out symbolically.
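
For instance, with two equations in two unknowns,

[ 2   1 ] [ x1 ]     [ 5 ]
[ 1   3 ] [ x2 ]  =  [ 4 ]

carrying out the multiplication symbolically gives 2x1 + x2 = 5 and x1 + 3x2 = 4.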

Matrix multiplication is associative, (AB)C = A(BC) (try proving this for an interesting exercise), but it is NOT commutative, i.e., AB is not, in general, equal to BA, or even defined, except in special circumstances. One of these circumstances is 1-by-1 matrices, for which addition and multiplication act just like addition and multiplication of the contained element. A formal way of stating this is to say that the algebraic systems of addition and multiplication of scalars, and of addition and multiplication of 1-by-1 matrices of those scalars, are isomorphic under the natural mapping.
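
A small numerical illustration of non-commutativity (a NumPy sketch with two arbitrary 2-by-2 matrices):

    import numpy as np

    A = np.array([[1, 2],
                  [0, 1]])
    B = np.array([[1, 0],
                  [3, 1]])

    print(A @ B)   # [[7 2]
                   #  [3 1]]
    print(B @ A)   # [[1 2]
                   #  [3 7]]  -- different, so AB != BA for these matrices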

Note that this is not the same as saying that 1-by-1 matrices are the same as scalars. They are not. Specifically, multiplication of an arbitrary matrix by a scalar is defined, but multiplication of an arbitrary matrix by a 1-by-1 matrix is not (the sizes will not match in general). However, note that a column vector C can be multiplied on the right by a 1-by-1 matrix [k], C[k], and a row vector R can be multiplied on the left, [k]R. The result in this case corresponds to scalar multiplication. This sometimes results in shorthand notation where it looks as if a 1-by-1 matrix has been treated as a scalar. In fact, the whole issue is sometimes swept under the rug, and expressions such as xty, where x and y are column vectors of the same size, are used to represent the dot product x ⋅ y, which has a scalar value. The "t" superscript means matrix transpose, which is what you get when you exchange the rows and columns of a matrix, so an m-by-n matrix becomes an n-by-m.
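
NumPy makes the same distinction between a 1-by-1 matrix and a scalar, as this sketch shows (the particular vectors are arbitrary):

    import numpy as np

    x = np.array([[1], [2], [3]])   # 3-by-1 column vector
    y = np.array([[4], [5], [6]])   # 3-by-1 column vector

    print(x.T @ y)                        # [[32]] -- a 1-by-1 matrix, from xty
    print(np.dot(x.ravel(), y.ravel()))   # 32    -- the dot product as a plain scalar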

Matrix Algebra

The definitions of matrix addition and multiplication allow square matrices of the same size to be added and multiplied to produce a square matrix also of the same size. This suggests the idea that matrices could be considered as a generalization of the concept of a number. We can follow this up by seeing how many analogous properties we can find.

  1. Matrices are closed under addition: the sum of two matrices is a matrix.
  2. We have already noted that matrix addition is commutative, just like addition of numbers, i.e.
    A + B = B + A.
  3. Also that matrix addition, like addition of numbers, is associative, i.e., (A + B) + C = A + (B + C).
  4. The matrix of all zeros added to any other matrix is the original matrix, that is, A + [0] = A, and this is the only such matrix. Thus there is a unique additive identity matrix analogous to the number zero.
  5. For any matrix, the matrix whose terms are the negation of the terms of the original yields the zero matrix when added to it. That is, for any matrix A, there is a matrix (-A), such that A + (-A) = [0]. This is the only such matrix. Thus, just like for numbers, we have a unique additive inverse.
  6. If we can multiply two matrices, the product is a matrix: matrices are closed under multiplication.
  7. As noted above, matrix multiplication, like that of numbers, is associative, that is, (AB)C = A(BC). Unlike numbers, matrix multiplication is not generally commutative (although some pairs of matrices do commute).
  8. The matrix consisting of 1s along the main diagonal and 0s elsewhere, when multiplied by a square matrix of the same size on the right or left, yields the original matrix. Such a matrix is referred to as the identity matrix, I, and is unique for a given size. The condition is usually written as AI = A = IA. There is thus a unique multiplicative identity matrix analogous to the number 1. For example, the 3-by-3 identity matrix is:
    [ 1   0   0 ]
    [ 0   1   0 ]
    [ 0   0   1 ]
    
    The identity matrix also preserves vectors, Ix = x, and indeed any other matrix for which multiplication with it is defined.
  9. It turns out that for "most", but not all, square matrices, there is a matrix that, when multiplied on the left or right by the original matrix, yields the identity matrix. Such a matrix is called an inverse, A-1, and the condition is written AA-1 = I = A-1A. Such an inverse, if it exists, is unique, and every left inverse is also a right inverse, and thus an inverse (and vice versa). It can be shown that a matrix A has an inverse if and only if some system of the form Ax = c has a unique solution, and furthermore, if there is a unique solution for some c, there is a unique solution for every c. Matrices for which the corresponding linear system has either no solutions or many solutions (and thus for which no inverse exists) are called singular. Thus, like numbers, square matrices usually have a unique inverse. The added complexity is that not only the 0 element, but the larger class of singular matrices, have no inverse. (A numerical sketch illustrating the identity and inverse properties appears after this list.)
  10. Matrix multiplication distributes over addition both on the left, and the right.
    That is, A(B + C) = AB + AC, and (A + B)C = AC + BC. This is analogous to the distributive property of multiplication over addition for numbers.
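
As mentioned in item 9, here is a small numerical sketch (assuming NumPy) of the identity and inverse properties for one particular nonsingular 3-by-3 matrix:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    I = np.eye(3)                  # the 3-by-3 identity matrix

    print(np.allclose(A @ I, A) and np.allclose(I @ A, A))          # AI = A = IA
    A_inv = np.linalg.inv(A)       # raises LinAlgError if A is singular
    print(np.allclose(A @ A_inv, I) and np.allclose(A_inv @ A, I))  # AA-1 = I = A-1A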

Taking all the above properties together, we note that with the exception of commutative multiplication and some additional elements without inverses, the set of square matrices of a given size acts just like numbers with respect to the basic algebraic operations of addition and multiplication. Mathematicians have taken note of this (and of many other examples of sets of objects plus operations with similarly analogous structure), and have developed an entire area of mathematics devoted to them. This area is called abstract algebra, and it is at the foundation of modern physics and a big chunk of modern mathematics. Below, we mention a few of the concepts by way of general interest (don't worry, you won't be asked to define them on the exam).

Flashes in the Dark: A Few Concepts from Abstract Algebra

For example, any set and associated binary operation that satisfies closure, associativity, identity element, and unique inverse is referred to as a group. If the operation is commutative, the group is an abelian group. Familiar groups are the integers, rationals, reals, and complex numbers (and integers mod n) under addition. Square matrices of a given size are thus a group under addition.

A system with a set and two binary operations (addition and multiplication) that satisfies all of our conditions plus commutativity of multiplication and a unique multiplicative inverse for every element except the additive identity (0) is referred to as a field. Familiar fields are the rationals, the reals, and the complex numbers with multiplication and addition, but not the integers (why?). Fields turn out to be relatively rare compared to groups among the mathematical systems that come up in engineering, physics, and Euclidean geometry. The square matrices are almost, but not quite, a field. They are, in fact, an example of an algebraic structure called a ring, which requires all the field properties except commutative multiplication and multiplicative inverses. A ring in which every element other than the additive identity has a multiplicative inverse (so that it lacks only commutative multiplication to be a field) is called a division ring; the square matrices fall short of this because the nonzero singular matrices have no inverses.

Mathematicians are interested in proving theorems about groups, rings, fields, etc. in general. The payoff is that any property that can be proved for, say, rings in general automatically applies to anything that is a ring. Since matrices have a lot of the abstract algebraic properties of numbers, we might look at other operations and theorems that are defined for numbers and ask if there are useful analogs in the matrix domain. For example, what about the exponential function of a matrix (e to a matrix power)? If we think about the exponential as a generalization of repeated multiplication, the idea doesn't even seem to make sense (e multiplied by itself a matrix number of times??). However, it turns out that a matrix exponential can not only be defined (in a rather direct manner that will make sense to you if you have had a semester of calculus), but represents the solution of some important equations in multi-dimensional spaces that are exactly analogous to equations with exponential solutions in one dimension.
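
As a teaser only, the matrix exponential is available off the shelf; this sketch assumes the SciPy library is installed, and the particular matrix is an arbitrary 2-by-2 example:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[ 0.0, 1.0],
                  [-1.0, 0.0]])
    print(expm(A))   # the matrix exponential e^A, computed numerically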

Or what about the square root of a matrix? Is it uniquely defined? (probably not, given that it is not unique even for real numbers). Is it usefully defined at all? If so, how many could there be? Can matrix calculus be defined? Are three (or more) dimensional things like matrices a useful concept? (yes, they go under the name of tensors, and you will encounter them eventually). What are their properties? ...

OK, relax. We are not intending to go off where the mathematicians go, at least not very far, but the point is that matrices (and complex numbers, quaternions, and many other mathematical constructs that may seem exotic and strange when you first encounter them) are not just arbitrary heuristics for solving a specific problem (e.g., systems of linear equations) but often turn out to have a lot of structure, much of which may be somewhat familiar due to analogs with numbers (or vectors, or matrices, once you have developed a feel for them).

Back to Reality

To return to the concrete, the convention we have developed for the representation of linear systems, Ax = c, has an algebraic form that is identical to the simple equation ax = c with x a simple variable. A solution to the latter can be written in closed form, x = c/a = a-1c. If A is nonsingular, we can multiply both sides of the matrix equation on the left by A-1: A-1Ax = x = A-1c. So if we can find the inverse matrix, we can solve the system by direct matrix multiplication.

It turns out that finding the inverse is as much work as solving the system by Gaussian reduction (in fact, a direct modification of Gaussian reduction is a standard way of finding the inverse), so we don't save any computational effort. However, algebraic manipulations of equations involving matrices and vectors can simplify the form before any computation is done, just as with ordinary equations. This can save considerable computational effort, and may also generate a representation that is more easily understandable, or displays structure not immediately evident in the original form. A good example of this is the matrix notation itself, which is marvelously compact, hiding a lot of detail that is irrelevant until it is time to make a final computation, but which makes structures and operations in multi-dimensional space more understandable by expressing them in a form that is analogous to the familiar, and more intuitively understandable one-dimensional form.
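
To make the practical point concrete, here is a NumPy sketch of the two approaches; np.linalg.solve factors the matrix in the spirit of Gaussian reduction rather than forming the inverse explicitly:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    c = np.array([5.0, 10.0])

    x_via_inverse = np.linalg.inv(A) @ c    # solve by explicit inversion
    x_via_solve   = np.linalg.solve(A, c)   # solve without forming the inverse
    print(x_via_inverse, x_via_solve)       # both print the solution [1. 3.]

In practice, solving directly is preferred for both speed and numerical accuracy, but either form follows from the same algebraic manipulation of Ax = c.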

Important Concepts and Characteristics of Matrices

There are a number of concepts and characteristics that come up repeatedly when dealing with matrices and linear systems. Some of these are listed below; several have already been mentioned above. Pointers into Wikipedia are given for those interested in learning more.

If you want to know even more about matrices check out the Wikipedia article on them.