
Section 3.1 Invertibility

Activity 3.1.0.1.

Let's consider the matrices
\begin{equation*} A = \left[\begin{array}{rrr} 1 \amp 0 \amp 2 \\ 2 \amp 2 \amp 1 \\ 1 \amp 1 \amp 1 \\ \end{array}\right], B = \left[\begin{array}{rrr} 1 \amp 2 \amp -4 \\ -1 \amp -1 \amp 3 \\ 0 \amp -1 \amp 2 \\ \end{array}\right]\text{.} \end{equation*}

(a)

Define these matrices in Sage and verify that \(BA = I\) so that \(B=A^{-1}\text{.}\)

(b)

Find the solution to the matrix equation \(A\xvec = \threevec{4}{-1}{4}\) using \(A^{-1}\text{.}\)

(c)

Using Sage, multiply \(A\) and \(B\) in the opposite order; that is, what do you find when you evaluate \(AB\text{?}\)
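The three parts of this activity can be carried out with any computer algebra system. A sketch using Python's sympy library, whose matrix operations mirror the Sage commands the activity assumes:

```python
from sympy import Matrix, eye

# The matrices from the activity
A = Matrix([[1, 0, 2], [2, 2, 1], [1, 1, 1]])
B = Matrix([[1, 2, -4], [-1, -1, 3], [0, -1, 2]])

# (a) verify that BA = I, so B is the inverse of A
print(B * A == eye(3))

# (b) solve Ax = b by computing x = A^{-1} b = B b
b = Matrix([4, -1, 4])
x = B * b
print(x)

# (c) multiply in the opposite order
print(A * B == eye(3))
```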

Activity 3.1.0.2.

Suppose that \(A\) is an \(n\times n\) invertible matrix with inverse \(A^{-1}\text{.}\) This means that every equation of the form \(A\xvec=\bvec\) has a solution, namely, \(\xvec = A^{-1}\bvec\text{.}\) Which of the following best describes a restatement of this fact?
  1. The columns of \(A\) are linearly independent.
  2. The columns of \(A\) span \(\real^n\text{.}\)

Activity 3.1.0.3.

Suppose that \(A\) is an \(n\times n\) invertible matrix with inverse \(A^{-1}\text{.}\) This means that every equation of the form \(A\xvec=\bvec\) has a solution, namely, \(\xvec = A^{-1}\bvec\text{.}\) What can you conclude about the pivot positions of the matrix \(A\text{?}\)
  1. Every column of \(A\) has a pivot position.
  2. Every row of \(A\) has a pivot position.
  3. Every row and every column of \(A\) has a pivot position.

Activity 3.1.0.4.

If \(A\) is an invertible \(4\times4\) matrix, what is its reduced row echelon form?

Activity 3.1.0.5.

In this activity, we let
\begin{equation*} A = \left[\begin{array}{rr} 1 \amp 2 \\ 1 \amp 3 \\ \end{array}\right] \end{equation*}
and construct its inverse \(A^{-1}\text{.}\) For the time being, let's denote the inverse by \(B\) so that \(B=A^{-1}\text{.}\)

(a)

We know that \(AB = I\text{.}\) If we write \(B = \left[\begin{array}{rr}\bvec_1\amp \bvec_2\end{array}\right]\text{,}\) then we have
\begin{equation*} AB = \left[\begin{array}{rr} A\bvec_1 \amp A\bvec_2 \end{array}\right] = \left[\begin{array}{rr} \evec_1 \amp \evec_2 \end{array}\right] = I\text{.} \end{equation*}
This means that we need to solve the equations
\begin{equation*} \begin{aligned} A\bvec_1 \amp {}={} \evec_1 \\ A\bvec_2 \amp {}={} \evec_2 \\ \end{aligned}\text{.} \end{equation*}
Using the Sage cell below, solve these equations for the columns of \(B\text{.}\)
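If Sage is not at hand, the same two equations can be solved with Python's sympy library; the `solve` calls below play the role of the Sage cell:

```python
from sympy import Matrix

A = Matrix([[1, 2], [1, 3]])
e1 = Matrix([1, 0])
e2 = Matrix([0, 1])

# Solve A b1 = e1 and A b2 = e2 for the columns of B
b1 = A.solve(e1)
b2 = A.solve(e2)
print(b1, b2)
```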

(b)

What is the matrix \(B\text{?}\) Check that \(AB = I\) and \(BA = I\text{.}\)

(c)

To find the columns of \(B\text{,}\) we solved two equations, \(A\bvec_1=\evec_1\) and \(A\bvec_2=\evec_2\text{.}\) We could do this by augmenting \(A\) two separate times, forming matrices
\begin{equation*} \begin{aligned} \left[\begin{array}{r|r} A \amp \evec_1 \end{array}\right] \amp \\ \left[\begin{array}{r|r} A \amp \evec_2 \end{array}\right] \amp \\ \end{aligned} \end{equation*}
and finding their reduced row echelon forms. But instead of solving these two equations separately, we could also solve them together by forming the augmented matrix \(\left[\begin{array}{r|rr} A \amp \evec_1 \amp \evec_2 \end{array}\right]\) and finding its reduced row echelon form. In other words, we augment \(A\) by the matrix \(I\) to form \(\left[\begin{array}{r|r} A \amp I \end{array} \right] \text{.}\)
Form this augmented matrix and find its reduced row echelon form to find \(A^{-1}\text{.}\)
Assuming \(A\) is invertible, we have shown that
\begin{equation*} \left[\begin{array}{r|r} A \amp I \end{array}\right] \sim \left[\begin{array}{r|r} I \amp A^{-1} \end{array}\right]\text{.} \end{equation*}
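This row reduction can be checked in any computer algebra system; a sympy sketch of augmenting \(A\) by \(I\) and row reducing:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2], [1, 3]])

# Augment A by the identity and find the reduced row echelon form
augmented = A.row_join(eye(2))
R, pivots = augmented.rref()
print(R)  # left block is I, right block is the inverse of A
```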

Activity 3.1.0.6.

If you have defined a matrix \(A\) in Sage, you can find its inverse as A.inverse(). Use Sage to find the inverse of the matrix
\begin{equation*} A = \left[\begin{array}{rrr} 1 \amp -2 \amp -1 \\ -1 \amp 5 \amp 6 \\ 5 \amp -4 \amp 6 \\ \end{array}\right]\text{.} \end{equation*}
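In Python's sympy library, a stand-in when Sage is unavailable, the analogous method is `A.inv()`:

```python
from sympy import Matrix

A = Matrix([[1, -2, -1], [-1, 5, 6], [5, -4, 6]])

# sympy's analogue of Sage's A.inverse() is A.inv()
Ainv = A.inv()
print(Ainv)
```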

Activity 3.1.0.7.

What happens when we try to find the inverse of the matrix
\begin{equation*} \left[\begin{array}{rr} -4 \amp 2 \\ -2 \amp 1 \\ \end{array}\right]\text{?} \end{equation*}
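Trying this in software makes the outcome concrete. In sympy (a stand-in for the Sage cell), the determinant is zero and the inversion attempt raises an error, just as Sage reports that the matrix must be nonsingular:

```python
from sympy import Matrix

M = Matrix([[-4, 2], [-2, 1]])
print(M.det())  # 0: the matrix is singular

# Attempting to invert raises an error
try:
    M.inv()
except Exception as e:
    print("not invertible:", e)
```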

Activity 3.1.0.8.

Suppose that \(n\times n\) matrices \(C\) and \(D\) are both invertible. What do you find when you simplify the product \((D^{-1}C^{-1})(CD)\text{?}\) Explain why this shows the product \(CD\) is invertible and \((CD)^{-1} = D^{-1}C^{-1}\text{.}\)
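The simplification can be spot-checked numerically. A sympy sketch with two invertible \(2\times 2\) matrices chosen purely for illustration:

```python
from sympy import Matrix

# Example invertible matrices (chosen only for illustration)
C = Matrix([[1, 2], [1, 3]])
D = Matrix([[2, 1], [1, 1]])

# (D^{-1} C^{-1})(C D) simplifies to the identity ...
product = (D.inv() * C.inv()) * (C * D)
print(product)

# ... so D^{-1} C^{-1} really is the inverse of CD
print((C * D).inv() == D.inv() * C.inv())
```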
We have seen how to use Gaussian elimination to find the inverse of a matrix. Now we want to look at how to use matrix multiplication to perform Gaussian elimination.

Activity 3.1.0.9.

Tweaking the identity matrix slightly allows us to write row operations in terms of matrix multiplication.

(a)

Create a matrix that doubles the third row of \(A\text{:}\)
\begin{equation*} \left[\begin{array}{ccc} \unknown & \unknown & \unknown \\ \unknown & \unknown & \unknown \\ \unknown & \unknown & \unknown \end{array}\right] \left[\begin{array}{ccc} 2 & 7 & -1 \\ 0 & 3 & 2 \\ 1 & 1 & -1 \end{array}\right] = \left[\begin{array}{ccc} 2 & 7 & -1 \\ 0 & 3 & 2 \\ 2 & 2 & -2 \end{array}\right] \end{equation*}

(b)

Create a matrix that swaps the second and third rows of \(A\text{:}\)
\begin{equation*} \left[\begin{array}{ccc} \unknown & \unknown & \unknown \\ \unknown & \unknown & \unknown \\ \unknown & \unknown & \unknown \end{array}\right] \left[\begin{array}{ccc} 2 & 7 & -1 \\ 0 & 3 & 2 \\ 1 & 1 & -1 \end{array}\right] = \left[\begin{array}{ccc} 2 & 7 & -1 \\ 1 & 1 & -1 \\ 0 & 3 & 2 \end{array}\right] \end{equation*}

(c)

Create a matrix that adds \(5\) times the third row of \(A\) to the first row:
\begin{equation*} \left[\begin{array}{ccc} \unknown & \unknown & \unknown \\ \unknown & \unknown & \unknown \\ \unknown & \unknown & \unknown \end{array}\right] \left[\begin{array}{ccc} 2 & 7 & -1 \\ 0 & 3 & 2 \\ 1 & 1 & -1 \end{array}\right] = \left[\begin{array}{ccc} 2+5(1) & 7+5(1) & -1+5(-1) \\ 0 & 3 & 2 \\ 1 & 1 & -1 \end{array}\right] \end{equation*}
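As a model for all three parts, here is one such tweak of the identity, a matrix that triples the *first* row, checked in sympy (a stand-in for Sage); the matrices requested above follow the same pattern:

```python
from sympy import Matrix, eye

A = Matrix([[2, 7, -1], [0, 3, 2], [1, 1, -1]])

# Tweak the identity: change the (1,1) entry so the product
# triples the first row of A and leaves the other rows alone
E = eye(3)
E[0, 0] = 3
print(E * A)
```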

Activity 3.1.0.10.

Consider the two row operations \(R_2\leftrightarrow R_3\) and \(R_1+R_2\to R_1\) applied as follows to show \(A\sim B\text{:}\)
\begin{align*} A = \left[\begin{array}{ccc} -1&4&5\\ 0&3&-1\\ 1&2&3\\ \end{array}\right] &\sim \left[\begin{array}{ccc} -1&4&5\\ 1&2&3\\ 0&3&-1\\ \end{array}\right]\\ &\sim \left[\begin{array}{ccc} -1+1&4+2&5+3\\ 1&2&3\\ 0&3&-1\\ \end{array}\right] = \left[\begin{array}{ccc} 0&6&8\\ 1&2&3\\ 0&3&-1\\ \end{array}\right] = B \end{align*}
Express these row operations as matrix multiplication by expressing \(B\) as the product of two matrices and \(A\text{:}\)
\begin{equation*} B = \left[\begin{array}{ccc} \unknown&\unknown&\unknown\\ \unknown&\unknown&\unknown\\ \unknown&\unknown&\unknown \end{array}\right] \left[\begin{array}{ccc} \unknown&\unknown&\unknown\\ \unknown&\unknown&\unknown\\ \unknown&\unknown&\unknown \end{array}\right] A \end{equation*}
Check your work using technology.
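One way to check with sympy (a stand-in for Sage): build candidate matrices \(E_1\) and \(E_2\) for the two row operations and confirm that \(E_2E_1A = B\). The matrices below are the ones these two operations produce, so compare them with your answer only after you have one:

```python
from sympy import Matrix

A = Matrix([[-1, 4, 5], [0, 3, -1], [1, 2, 3]])
B = Matrix([[0, 6, 8], [1, 2, 3], [0, 3, -1]])

E1 = Matrix([[1, 0, 0], [0, 0, 1], [0, 1, 0]])  # swaps rows 2 and 3
E2 = Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]])  # adds row 2 to row 1
print(E2 * E1 * A == B)
```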
We have now seen how to do Gaussian elimination with matrix multiplication. If we are careful to describe our row operations with lower triangular matrices, then we can connect matrix multiplication to finding the inverse. We will see this through the next couple of activities.

Activity 3.1.0.11.

Let
\begin{equation*} A = \left[\begin{array}{rrr} 1 \amp 2 \amp 1 \\ 2 \amp 0 \amp -2 \\ -1 \amp 2 \amp -1 \\ \end{array}\right]\text{.} \end{equation*}
When performing Gaussian elimination on \(A\text{,}\) we first apply a row replacement operation in which we multiply the first row by \(-2\) and add to the second row. After this step, we have a new matrix \(A_1\text{.}\)
\begin{equation*} A = \left[\begin{array}{rrr} 1 \amp 2 \amp 1 \\ 2 \amp 0 \amp -2 \\ -1 \amp 2 \amp -1 \\ \end{array}\right] \sim \left[\begin{array}{rrr} 1 \amp 2 \amp 1 \\ 0 \amp -4 \amp -4 \\ -1 \amp 2 \amp -1 \\ \end{array}\right] = A_1\text{.} \end{equation*}

(a)

Show that multiplying \(A\) by the lower triangular matrix
\begin{equation*} L_1 = \left[\begin{array}{rrr} 1 \amp 0 \amp 0 \\ -2 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \\ \end{array}\right] \end{equation*}
has the same effect as this row operation; that is, show that \(L_1A = A_1\text{.}\)

(b)

Explain why \(L_1\) is invertible and find its inverse \(L_1^{-1}\text{.}\)

(c)

You should see that there is a simple relationship between \(L_1\) and \(L_1^{-1}\text{.}\) Describe this relationship and explain why it holds.
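Both parts (a) and (b) can be verified computationally; a sympy sketch (a stand-in for Sage):

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 1], [2, 0, -2], [-1, 2, -1]])
A1 = Matrix([[1, 2, 1], [0, -4, -4], [-1, 2, -1]])
L1 = Matrix([[1, 0, 0], [-2, 1, 0], [0, 0, 1]])

# (a) the row replacement operation as matrix multiplication
print(L1 * A == A1)

# (b) L1 is invertible; its inverse undoes the replacement
print(L1.inv())
```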

Activity 3.1.0.12.

To continue the Gaussian elimination algorithm, we need to apply two more row replacements to bring \(A\) into a triangular form \(U\) where
\begin{equation*} A = \left[\begin{array}{rrr} 1 \amp 2 \amp 1 \\ 2 \amp 0 \amp -2 \\ -1 \amp 2 \amp -1 \\ \end{array}\right] \sim \left[\begin{array}{rrr} 1 \amp 2 \amp 1 \\ 0 \amp -4 \amp -4 \\ 0 \amp 0 \amp -4 \\ \end{array}\right] = U\text{.} \end{equation*}

(a)

Find the matrices \(L_2\) and \(L_3\) that perform these row replacement operations so that \(L_3L_2L_1 A = U\text{.}\)

(b)

Explain why the matrix product \(L_3L_2L_1\) is invertible and use this fact to write \(A = LU\text{.}\) What is the matrix \(L\) that you find? Why do you think we denote it by \(L\text{?}\)
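The full factorization can be checked in sympy (a stand-in for Sage). The matrices \(L_2\) and \(L_3\) below are the answers to part (a), so compare them with yours only after working it out:

```python
from sympy import Matrix

A = Matrix([[1, 2, 1], [2, 0, -2], [-1, 2, -1]])
U = Matrix([[1, 2, 1], [0, -4, -4], [0, 0, -4]])
L1 = Matrix([[1, 0, 0], [-2, 1, 0], [0, 0, 1]])

# The two remaining replacements: add row 1 to row 3, then row 2 to row 3
L2 = Matrix([[1, 0, 0], [0, 1, 0], [1, 0, 1]])
L3 = Matrix([[1, 0, 0], [0, 1, 0], [0, 1, 1]])
print(L3 * L2 * L1 * A == U)

# Since L3 L2 L1 is invertible, A = (L3 L2 L1)^{-1} U = L U
L = (L3 * L2 * L1).inv()
print(L)
print(A == L * U)
```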

Activity 3.1.0.13.

Row replacement operations may always be performed by multiplying by a lower triangular matrix. It turns out that the other two row operations, scaling and interchange, may also be performed using matrix multiplication. For instance, consider the two matrices
\begin{equation*} S = \left[\begin{array}{rrr} 1 \amp 0 \amp 0 \\ 0 \amp 3 \amp 0 \\ 0 \amp 0 \amp 1 \\ \end{array}\right], \hspace{24pt} P = \left[\begin{array}{rrr} 0 \amp 0 \amp 1 \\ 0 \amp 1 \amp 0 \\ 1 \amp 0 \amp 0 \\ \end{array}\right]\text{.} \end{equation*}

(a)

Show that multiplying \(A\) by \(S\) performs a scaling operation and that multiplying by \(P\) performs a row interchange.

(b)

Explain why the matrices \(S\) and \(P\) are invertible and state their inverses.
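Both parts can be verified in sympy (a stand-in for Sage), applying \(S\) and \(P\) to the matrix \(A\) from the previous activities:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 1], [2, 0, -2], [-1, 2, -1]])
S = Matrix([[1, 0, 0], [0, 3, 0], [0, 0, 1]])
P = Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])

# (a) S scales the second row by 3; P swaps the first and third rows
print(S * A)
print(P * A)

# (b) S is undone by scaling the second row back by 1/3,
#     while P is its own inverse
print(S.inv())
print(P * P == eye(3))
```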