Lower-Upper Matrix Decomposition in C#

A lot of scientific and numerical programming involves working with matrices, and a common matrix operation is inverting a matrix. In case you’ve forgotten, the inverse of a square matrix M is another square matrix M^-1 where M * M^-1 = M^-1 * M = I, the identity matrix (a square matrix with 1s on the diagonal and 0s elsewhere). For example, the inverse of

1   3   1
1   1   2
2   3   4

is

 2   9  -5
 0  -2   1
-1  -3   2

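To make the definition concrete, here is a small, self-contained check that multiplying the 3x3 matrix above by its inverse gives the identity matrix. The MatProduct helper and the jagged-array representation are just choices I made for this sketch:

using System;

class InverseCheck
{
  // Multiply two square matrices represented as jagged arrays.
  static double[][] MatProduct(double[][] a, double[][] b)
  {
    int n = a.Length;
    double[][] result = new double[n][];
    for (int i = 0; i < n; ++i)
    {
      result[i] = new double[n];
      for (int j = 0; j < n; ++j)
        for (int k = 0; k < n; ++k)
          result[i][j] += a[i][k] * b[k][j];
    }
    return result;
  }

  static void Main()
  {
    double[][] m = new double[][] {
      new double[] { 1, 3, 1 },
      new double[] { 1, 1, 2 },
      new double[] { 2, 3, 4 } };

    double[][] mInv = new double[][] {
      new double[] {  2,  9, -5 },
      new double[] {  0, -2,  1 },
      new double[] { -1, -3,  2 } };

    double[][] prod = MatProduct(m, mInv);  // should be the identity matrix
    for (int i = 0; i < prod.Length; ++i)
      Console.WriteLine(string.Join("  ", prod[i]));
  }
}
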
There are several ways to compute the inverse of a matrix. Most math classes teach a technique where you find the determinant of M, then compute a bunch of so-called minors of M, then a matrix of cofactors of M, then the adjoint matrix, and finally use the adjoint and the determinant to compute the inverse of M. It turns out that translating this process into code is moderately difficult and quite inefficient.

A better, but totally non-obvious, approach is to compute a so-called lower-upper (LU) decomposition of M and then use the decomposition to find the inverse. In essence you find two matrices which, when multiplied together, give you the original matrix. The lower matrix has all 0s in the upper right, and the upper matrix has all 0s in the lower left. The lower-upper decomposition is best explained by example (a code sketch of one common variation appears after the example below). Here’s one I found at http://tutorial.math.lamar.edu/Classes/LinAlg/LUDecomposition.aspx. Suppose M is

3   6  -9
2   5  -3
-4  1  10

then L is

3   0   0
2   1   0
-4  9  -29

and U is

1   2  -3
0   1   3
0   0   1

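Here is a minimal sketch of a Crout-style decomposition, the variation that produces the L and U shown above (L keeps the diagonal values, U gets 1s on its diagonal). The method and variable names are mine, and there is no pivoting, which real code needs in order to avoid dividing by a zero or near-zero value:

using System;

class CroutDemo
{
  // Factor a into lower * upper using Crout's method (no pivoting).
  static void CroutDecompose(double[][] a, out double[][] lower,
    out double[][] upper)
  {
    int n = a.Length;
    lower = new double[n][];
    upper = new double[n][];
    for (int i = 0; i < n; ++i)
    {
      lower[i] = new double[n];
      upper[i] = new double[n];
    }

    for (int j = 0; j < n; ++j)
    {
      upper[j][j] = 1.0;  // unit diagonal on U

      for (int i = j; i < n; ++i)  // column j of L
      {
        double sum = 0.0;
        for (int k = 0; k < j; ++k)
          sum += lower[i][k] * upper[k][j];
        lower[i][j] = a[i][j] - sum;
      }

      for (int i = j + 1; i < n; ++i)  // row j of U
      {
        double sum = 0.0;
        for (int k = 0; k < j; ++k)
          sum += lower[j][k] * upper[k][i];
        upper[j][i] = (a[j][i] - sum) / lower[j][j];  // assumes nonzero pivot
      }
    }
  }

  static void Main()
  {
    double[][] m = new double[][] {
      new double[] {  3, 6, -9 },
      new double[] {  2, 5, -3 },
      new double[] { -4, 1, 10 } };

    double[][] lower;
    double[][] upper;
    CroutDecompose(m, out lower, out upper);

    for (int i = 0; i < 3; ++i)
      Console.WriteLine(string.Join("  ", lower[i]) + "   |   " +
        string.Join("  ", upper[i]));
  }
}

On the example matrix this reproduces the L and U above exactly, since all the intermediate values are small integers.
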
Once you have L and U you can use them to compute the inverse of M using forward and back substitution. There are a lot of tricky details I’ve left out. I found several algorithms for lower-upper decomposition and I’m in the process of implementing a routine in C#.
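
Here is a sketch of that second half of the job, again with helper names of my own choosing. For each column e of the identity matrix, solve L * y = e by forward substitution and then U * x = y by back substitution; the resulting x is the corresponding column of the inverse. The code assumes the Crout-style convention used above (diagonal values on L, 1s on the diagonal of U) and, like the decomposition sketch, ignores pivoting:

class LUInverse
{
  // Solve L * y = b (forward substitution), then U * x = y (back
  // substitution); the returned x solves (L * U) * x = b.
  static double[] SolveColumn(double[][] lower, double[][] upper,
    double[] b)
  {
    int n = b.Length;

    double[] y = new double[n];
    for (int i = 0; i < n; ++i)
    {
      double sum = b[i];
      for (int k = 0; k < i; ++k)
        sum -= lower[i][k] * y[k];
      y[i] = sum / lower[i][i];  // L's diagonal is not 1 here
    }

    double[] x = new double[n];
    for (int i = n - 1; i >= 0; --i)
    {
      double sum = y[i];
      for (int k = i + 1; k < n; ++k)
        sum -= upper[i][k] * x[k];
      x[i] = sum;  // no division because U's diagonal is 1
    }
    return x;
  }

  // Build the inverse one column at a time by solving against each
  // column of the identity matrix.
  public static double[][] InverseFromLU(double[][] lower,
    double[][] upper)
  {
    int n = lower.Length;
    double[][] result = new double[n][];
    for (int i = 0; i < n; ++i)
      result[i] = new double[n];

    for (int j = 0; j < n; ++j)
    {
      double[] e = new double[n];
      e[j] = 1.0;  // j-th column of the identity matrix
      double[] col = SolveColumn(lower, upper, e);
      for (int i = 0; i < n; ++i)
        result[i][j] = col[i];
    }
    return result;
  }
}

Feeding this the L and U from the previous sketch should recover the inverse of the 3x3 example, up to floating point round-off.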
