Inverse Matrices


In the article on LU Decomposition we found that Matrix Elimination can be done by multiplying with elimination matrices, for example with $\mathbf{E}_{32}$ \begin{align} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 4 & 1 \end{bmatrix}= \begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & -2 \\ 0 & 0 & 5 \end{bmatrix} \end{align} What is the matrix that, instead of subtracting two times row $2$ from row $3$, does the inverse operation? The matrix we are searching for adds two times row $2$ to row $3$ and is called the inverse of $\mathbf{E}_{32}$. \begin{align} \mathbf{E}_{32}^{-1} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix} \end{align} Doing both operations gives $\mathbf{A}$ back again \begin{align} \mathbf{E}_{32}^{-1}\mathbf{E}_{32}\mathbf{A}=\mathbf{A} \end{align} so $\mathbf{E}_{32}^{-1}\mathbf{E}_{32}$ is just the identity \begin{align} \mathbf{E}_{32}^{-1}\mathbf{E}_{32}=\mathbf{I} \end{align} \begin{align} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{align} which leaves $\mathbf{A}$ as it is.
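As a quick numerical check, here is a minimal NumPy sketch (the variable names are ours, chosen for illustration) that confirms $\mathbf{E}_{32}^{-1}\mathbf{E}_{32}=\mathbf{I}$ and that undoing the elimination step returns $\mathbf{A}$:

```python
import numpy as np

# Elimination matrix from above: subtracts 2 * row 2 from row 3.
E = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, -2, 1]])

# Its inverse: adds 2 * row 2 back to row 3.
E_inv = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [0, 2, 1]])

A = np.array([[1, 2, 1],
              [0, 2, -2],
              [0, 4, 1]])

print(E_inv @ E)        # the 3x3 identity
print(E_inv @ (E @ A))  # A again, unchanged
```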

Finding the inverse of $\mathbf{E}_{32}$ was easy, but how do we find the inverse of a general matrix $\mathbf{A}$, so that $\mathbf{A}^{-1}\mathbf{A}$ is just the identity? \begin{align} \mathbf{A}^{-1}\mathbf{A}=\mathbf{I} \end{align}

For square invertible matrices ($n \times n$-matrices) a left inverse is automatically a right inverse as well, so multiplying with $\mathbf{A}^{-1}$ on the right also gives the identity.

\begin{align} \mathbf{A}\mathbf{A}^{-1}=\mathbf{I} \end{align}
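A small numerical illustration of this fact (a sketch only; the matrix is an arbitrary invertible example, not from the text above):

```python
import numpy as np

# An arbitrary invertible 2x2 matrix (determinant = 1).
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
A_inv = np.linalg.inv(A)

# The same matrix serves as left inverse and as right inverse.
print(np.allclose(A_inv @ A, np.eye(2)))  # True
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```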

Can we always find an inverse? Certainly not. For singular matrices (matrices in which one column or row is a multiple of another, so the columns are linearly dependent) we can't find an inverse. Look at the matrix equation \begin{align} \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{align} We can find a vector $(x,y)=(3,-1)$ that satisfies the equation, since three times the first column minus the second column is indeed zero. However, if there were an inverse and we multiplied both sides by it, we would get \begin{align} \mathbf{A}^{-1}\mathbf{A}\mathbf{x}=\mathbf{A}^{-1}\mathbf{0}=\mathbf{0} \end{align} The only solution would then be the zero vector, but we already found the non-zero solution $(x,y)=(3,-1)$, so there is no inverse. Now let's consider the other case, a non-singular matrix, e.g. \begin{align} \mathbf{A}=\begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix} \end{align} What is the inverse of this matrix? \begin{align} \underbrace{\begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix}}_{\mathbf{A}} \underbrace{\begin{bmatrix} a & b \\ c & d \end{bmatrix}}_{\mathbf{A}^{-1}} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{align} We can interpret this equation as two equations \begin{align} \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \tag{1} \end{align} \begin{align} \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \tag{2} \end{align} and find the inverse by solving them. But there is a more convenient method for finding the inverse, called the Gauß-Jordan method.
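Before turning to that method, both cases are easy to reproduce numerically. The sketch below (assuming NumPy; `np.linalg.inv` raises a `LinAlgError` for an exactly singular matrix) also solves equations (1) and (2) column by column:

```python
import numpy as np

# The singular matrix: the second column is 3 times the first.
S = np.array([[1.0, 3.0],
              [2.0, 6.0]])
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)  # Singular matrix

# The non-singular matrix: solve (1) and (2) for the columns
# (a, c) and (b, d) of the inverse.
A = np.array([[1.0, 3.0],
              [2.0, 7.0]])
col1 = np.linalg.solve(A, [1.0, 0.0])  # (a, c)
col2 = np.linalg.solve(A, [0.0, 1.0])  # (b, d)
print(np.column_stack([col1, col2]))   # [[ 7. -3.] [-2.  1.]]
```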


Gauß-Jordan Method

Remember that to solve (1) and (2) we could use Matrix Elimination. We would write down the matrix $\mathbf{A}$ augmented with the vector $\mathbf{b}$ and start the elimination algorithm. But why not tack on the second vector as well \begin{align} \begin{array}{rr|rr} 1 & 3 & 1 & 0 \\ 2 & 7 & 0 & 1 \end{array} \end{align}
and do elimination for both equations at once? This is the idea of the Gauß-Jordan method. \begin{align} \begin{array}{rr|rr} 1 & 3 & 1 & 0 \\ 2 & 7 & 0 & 1 \end{array} \overset{row2-2 \cdot row1}{\longrightarrow} \begin{array}{rr|rr} 1 & 3 & 1 & 0 \\ 0 & 1 & -2 & 1 \end{array} \end{align}
In Matrix Elimination we would stop here, but now we proceed until the identity matrix appears on the left \begin{align} \begin{array}{rr|rr} 1 & 3 & 1 & 0 \\ 0 & 1 & -2 & 1 \end{array} \overset{row1-3 \cdot row2}{\longrightarrow} \begin{array}{rr|rr} 1 & 0 & 7 & -3 \\ 0 & 1 & -2 & 1 \end{array} \end{align}
Let us check if this is really the inverse matrix \begin{align} \begin{bmatrix} 1 & 3 \\ 2 & 7 \end{bmatrix} \begin{bmatrix} 7 & -3 \\ -2 & 1 \end{bmatrix} =\begin{bmatrix} 7-6 & -3+3 \\ 14-14 & -6+7 \end{bmatrix}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{align} so indeed \begin{align} \begin{bmatrix} 7 & -3 \\ -2 & 1 \end{bmatrix} \end{align} is the inverse we were searching for.
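The whole procedure is straightforward to automate. Here is a minimal Gauß-Jordan sketch (our own illustrative implementation; it performs no row exchanges, so it assumes every pivot it meets is non-zero):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Run elimination on the augmented matrix [A | I] until the
    identity appears on the left; the right block is then A^-1.
    Minimal sketch: no row exchanges, so a zero pivot will fail."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix [A | I]
    for i in range(n):
        M[i] = M[i] / M[i, i]           # scale the pivot row so the pivot is 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]  # clear column i in every other row
    return M[:, n:]                     # right half is the inverse

A = np.array([[1, 3],
              [2, 7]])
print(gauss_jordan_inverse(A))  # [[ 7. -3.] [-2.  1.]]
```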


Inverse of Products

Sometimes we deal with a product of matrices $\mathbf{AB}$ and would like to know its inverse. Inverting a product reverses the order of multiplication, \begin{align} (\mathbf{AB})^{-1}=\mathbf{B}^{-1}\mathbf{A}^{-1} \end{align} which we can verify directly: \begin{align} \mathbf{AB}(\mathbf{AB})^{-1}=\mathbf{A}\mathbf{B}\mathbf{B}^{-1}\mathbf{A}^{-1}=\mathbf{A}\mathbf{A}^{-1}=\mathbf{I} \end{align}
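A quick NumPy check of the reversed order (the second matrix is an arbitrary invertible example chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 7.0]])
B = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# (AB)^-1 equals B^-1 A^-1, but not A^-1 B^-1 in general.
lhs = np.linalg.inv(A @ B)
print(np.allclose(lhs, np.linalg.inv(B) @ np.linalg.inv(A)))  # True
print(np.allclose(lhs, np.linalg.inv(A) @ np.linalg.inv(B)))  # False
```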

Inverse of a Transpose

If we transpose the equation \begin{align} \mathbf{A}^{-1}\mathbf{A}=\mathbf{I} \end{align} remembering that transposing a product reverses the order of the factors, we have \begin{align} \mathbf{A}^T(\mathbf{A}^{-1})^T=\mathbf{I} \end{align} This tells us that the inverse of $\mathbf{A}^T$ is the transpose of the inverse, $(\mathbf{A}^T)^{-1}=(\mathbf{A}^{-1})^T$. For the above example we have \begin{align} \mathbf{A}^T=\begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix} \end{align} and the inverse of this matrix, by what we just found, must be \begin{align} (\mathbf{A}^{-1})^T=\begin{bmatrix} 7 & -2 \\ -3 & 1 \end{bmatrix} \end{align} which is readily checked: \begin{align} \begin{bmatrix} 1 & 2 \\ 3 & 7 \end{bmatrix}\begin{bmatrix} 7 & -2 \\ -3 & 1 \end{bmatrix} =\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{align} This means it doesn't matter whether we first take the inverse and then transpose, or vice versa.
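In code the same fact reads (a small NumPy check on the example matrix above):

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 7.0]])

# Transposing then inverting equals inverting then transposing.
print(np.linalg.inv(A.T))  # [[ 7. -2.] [-3.  1.]]
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))  # True
```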



Video Lectures:

  • Gilbert Strang - Introduction to Linear Algebra Lec. 3
  • Gilbert Strang - Introduction to Linear Algebra Lec. 4