2.4 Inverse Matrices
- Find the inverse of a matrix, if it exists.
- Use inverses to solve linear systems.
Why do we need inverse matrices? Imagine you have a secret recipe that transforms simple ingredients into a complex dish through a series of specific steps. If you want to reverse the process—starting with the finished dish and recovering the original ingredients—you need the "inverse" recipe. In mathematics, if a matrix \(A\) transforms a vector \(x\) into \(b\) via \(Ax = b\), the inverse matrix \(A^{-1}\) lets us recover \(x\) from \(b\). This is incredibly powerful in computer graphics (undoing transformations), cryptography (decoding messages), and economics (reversing input-output models).
What Is an Inverse Matrix?
An \(n \times n\) matrix \(A\) has an inverse if there exists a matrix \(B\) such that:
\[AB = BA = I_n\]where \(I_n\) is the \(n \times n\) identity matrix. The inverse of a matrix \(A\), if it exists, is denoted by the symbol \(A^{-1}\).
Think of the identity matrix \(I\) as the number 1 in regular multiplication. Just like \(5 \cdot 1 = 5\) and \(1 \cdot 5 = 5\), multiplying any matrix by the identity leaves it unchanged. The inverse matrix acts like a reciprocal: just as \(5 \cdot \frac{1}{5} = 1\), we have \(A \cdot A^{-1} = I\).
Given matrices \(A\) and \(B\) below, verify that they are inverses.
\[ \mathbf{A} = \begin{bmatrix} 4 & 1 \\ 3 & 1 \end{bmatrix} \quad \mathbf{B} = \begin{bmatrix} 1 & -1 \\ -3 & 4 \end{bmatrix} \]
Example 2.4.1 Solution
The matrices are inverses if both products \(AB\) and \(BA\) equal the identity matrix of dimension \(2 \times 2\), which we denote as \(I_2\).
First, we compute \(AB\):
\[ \mathbf{AB} = \begin{bmatrix} 4 & 1 \\ 3 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 \\ -3 & 4 \end{bmatrix} = \begin{bmatrix} 4(1) + 1(-3) & 4(-1) + 1(4) \\ 3(1) + 1(-3) & 3(-1) + 1(4) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \mathbf{I}_2 \]Next, we compute \(BA\):
\[ \mathbf{BA} = \begin{bmatrix} 1 & -1 \\ -3 & 4 \end{bmatrix} \begin{bmatrix} 4 & 1 \\ 3 & 1 \end{bmatrix} = \begin{bmatrix} 1(4) + (-1)(3) & 1(1) + (-1)(1) \\ -3(4) + 4(3) & -3(1) + 4(1) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \mathbf{I}_2 \]Since both products equal the identity matrix, \(A\) and \(B\) are indeed inverses of each other. We can write \(B = A^{-1}\) and \(A = B^{-1}\).
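This kind of verification is easy to automate. Below is a minimal Python sketch (the helper `mat_mul` is written here for illustration; matrices are plain lists of rows, with no matrix library assumed):

```python
def mat_mul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[4, 1], [3, 1]]
B = [[1, -1], [-3, 4]]
I2 = [[1, 0], [0, 1]]

# Both products must equal the identity for A and B to be inverses.
print(mat_mul(A, B) == I2)  # True
print(mat_mul(B, A) == I2)  # True
```

Because the entries are integers, the comparison with the identity is exact.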
Finding the Inverse of a Matrix
Find the inverse of the matrix \(\mathbf{A} = \begin{bmatrix} 3 & 1 \\ 5 & 2 \end{bmatrix}\).
Example 2.4.2 Solution
Suppose \(A\) has an inverse, and we call it \(B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\).
For \(B\) to be the inverse, we need \(AB = I_2\):
\[ \begin{bmatrix} 3 & 1 \\ 5 & 2 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \]Multiplying the matrices on the left side:
\[ \begin{bmatrix} 3a + c & 3b + d \\ 5a + 2c & 5b + 2d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \]Equating corresponding entries gives us four equations with four unknowns:
\[ \begin{aligned} 3a + c &= 1 \quad & 3b + d &= 0 \\ 5a + 2c &= 0 \quad & 5b + 2d &= 1 \end{aligned} \]Solving this system (using substitution or elimination), we find:
\[a = 2, \quad b = -1, \quad c = -5, \quad d = 3\]Therefore, the inverse of matrix \(A\) is:
\[\mathbf{A}^{-1} = \begin{bmatrix} 2 & -1 \\ -5 & 3 \end{bmatrix}\]Notice that finding the inverse required solving two separate systems of equations—one for the first column \((a, c)\) and one for the second column \((b, d)\). The augmented matrices for both systems are:
\[ \left[ \begin{array}{cc|c} 3 & 1 & 1 \\ 5 & 2 & 0 \end{array} \right] \quad \text{and} \quad \left[ \begin{array}{cc|c} 3 & 1 & 0 \\ 5 & 2 & 1 \end{array} \right] \]Since the coefficient matrix is identical for both systems, the row operations in the Gauss-Jordan method will be exactly the same. We can save considerable effort by combining both right-hand columns into a single augmented matrix:
\[ \left[ \begin{array}{cc|cc} 3 & 1 & 1 & 0 \\ 5 & 2 & 0 & 1 \end{array} \right] \]Performing row operations to obtain reduced row echelon form:
\[ \left[ \begin{array}{cc|cc} 1 & 0 & 2 & -1 \\ 0 & 1 & -5 & 3 \end{array} \right] \]The matrix on the right side of the vertical line is precisely \(A^{-1}\)!
The Method for Finding the Inverse of a Matrix
This observation leads us to a powerful algorithm for finding inverses:
- Write the augmented matrix \([A \mid I_n]\), placing the identity matrix next to your original matrix.
- Transform this augmented matrix into reduced row echelon form using row operations.
- If the result is \([I_n \mid B]\), then \(B\) is the inverse of \(A\).
- If the left side cannot be reduced to \(I_n\) (for example, a row of zeros appears on the left), then \(A\) has no inverse.
Think of this method as "transforming \(A\) into \(I\) while simultaneously applying the same steps to \(I\)." Whatever operations turn \(A\) into the identity, when applied to the identity itself, produce the inverse. It's like a robot that disassembles a machine while writing down every step it takes: hand that same list of steps to someone holding the parts, and it serves as the assembly guide.
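The steps above can be sketched directly in code. This is a minimal Python implementation (the function name `inverse` and the tolerance `1e-12` are choices of this sketch, not from the text); it row-reduces \([A \mid I_n]\) exactly as the method describes:

```python
def inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A | I].

    A is a list of rows; returns the inverse as a list of rows,
    or None if the left side cannot be reduced to the identity.
    """
    n = len(A)
    # Step 1: write the augmented matrix [A | I_n].
    M = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Choose the row with the largest entry in this column as the pivot.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            return None  # no usable pivot: A is singular, the inverse does not exist
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry is 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half of [I_n | B] is the inverse.
    return [row[n:] for row in M]

A = [[3, 1], [5, 2]]
print([[round(x, 9) for x in row] for row in inverse(A)])
# -> [[2.0, -1.0], [-5.0, 3.0]], matching Example 2.4.2
```

The rounding only cleans up floating-point noise; with exact rational arithmetic (e.g. Python's `fractions.Fraction`) the result would be exact.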
Given the matrix \(A\), find its inverse:
\[ \mathbf{A} = \begin{bmatrix} 1 & -1 & 1 \\ 2 & 3 & 0 \\ 0 & -2 & 1 \end{bmatrix} \]
Example 2.4.3 Solution
We begin by writing the augmented matrix \([A \mid I_3]\):
\[ \left[ \begin{array}{ccc|ccc} 1 & -1 & 1 & 1 & 0 & 0 \\ 2 & 3 & 0 & 0 & 1 & 0 \\ 0 & -2 & 1 & 0 & 0 & 1 \end{array} \right] \]Step 1: Multiply the first row by \(-2\) and add it to the second row (\(-2R_1 + R_2 \to R_2\)):
\[ \left[ \begin{array}{ccc|ccc} 1 & -1 & 1 & 1 & 0 & 0 \\ 0 & 5 & -2 & -2 & 1 & 0 \\ 0 & -2 & 1 & 0 & 0 & 1 \end{array} \right] \]Step 2: Swap the second and third rows (\(R_2 \leftrightarrow R_3\)) so the smaller entry \(-2\), rather than \(5\), becomes the pivot, keeping the arithmetic simpler:
\[ \left[ \begin{array}{ccc|ccc} 1 & -1 & 1 & 1 & 0 & 0 \\ 0 & -2 & 1 & 0 & 0 & 1 \\ 0 & 5 & -2 & -2 & 1 & 0 \end{array} \right] \]Step 3: Divide the second row by \(-2\) (\(R_2 \div (-2) \to R_2\)):
\[ \left[ \begin{array}{ccc|ccc} 1 & -1 & 1 & 1 & 0 & 0 \\ 0 & 1 & -1/2 & 0 & 0 & -1/2 \\ 0 & 5 & -2 & -2 & 1 & 0 \end{array} \right] \]Step 4: Eliminate above and below the pivot in column 2:
- Add row 2 to row 1 (\(R_2 + R_1 \to R_1\))
- Add \(-5\) times row 2 to row 3 (\(-5R_2 + R_3 \to R_3\))
Step 5: Multiply the third row by \(2\) (\(2R_3 \to R_3\)) to get a leading 1:
\[ \left[ \begin{array}{ccc|ccc} 1 & 0 & 1/2 & 1 & 0 & -1/2 \\ 0 & 1 & -1/2 & 0 & 0 & -1/2 \\ 0 & 0 & 1 & -4 & 2 & 5 \end{array} \right] \]Step 6: Eliminate above the pivot in column 3:
- Add \(1/2\) times row 3 to row 2 (\(\frac{1}{2}R_3 + R_2 \to R_2\))
- Add \(-1/2\) times row 3 to row 1 (\(-\frac{1}{2}R_3 + R_1 \to R_1\))
Therefore, the inverse of matrix \(A\) is the right-hand side:
\[\mathbf{A}^{-1} = \begin{bmatrix} 3 & -1 & -3 \\ -2 & 1 & 2 \\ -4 & 2 & 5 \end{bmatrix}\]You should always verify this result by multiplying \(A \cdot A^{-1}\) to confirm you get the identity matrix.
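As the text suggests, the result is worth checking numerically. A short Python sketch (matrices as plain nested lists; the helper `mat_mul` is invented for illustration):

```python
def mat_mul(A, B):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, -1, 1], [2, 3, 0], [0, -2, 1]]
A_inv = [[3, -1, -3], [-2, 1, 2], [-4, 2, 5]]
I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

# Both products should be the 3x3 identity.
print(mat_mul(A, A_inv) == I3)  # True
print(mat_mul(A_inv, A) == I3)  # True
```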
Using Inverses to Solve Linear Systems
Now that we know how to find the inverse of a matrix, we can use it to solve systems of equations. The method is analogous to solving a simple algebraic equation.
Solve the following equation:
\[\frac{2}{3}x = 4\]
Example 2.4.4 Solution
To solve this, we multiply both sides by the multiplicative inverse of \(\frac{2}{3}\), which is \(\frac{3}{2}\):
\[ \frac{3}{2} \cdot \frac{2}{3}x = 4 \cdot \frac{3}{2} \] \[ x = 6 \]This same logic applies to matrix equations. To solve a linear system, we first express it in the matrix form \(\mathbf{AX} = \mathbf{B}\), where:
- \(A\) is the coefficient matrix
- \(X\) is the matrix of variables
- \(B\) is the matrix of constant terms
We then multiply both sides by \(A^{-1}\) on the left.
When is this useful? If you need to solve the same system multiple times with different constants (like calculating outputs for different input scenarios in an economic model), finding \(A^{-1}\) once allows you to solve each case instantly with simple matrix multiplication \(X = A^{-1}B\), rather than performing row reduction every time.
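This reuse is easy to see in code. A hedged Python sketch (the helper `solve_with_inverse` and the sample right-hand sides are invented for illustration), using the inverse found in Example 2.4.2:

```python
# A^{-1} for A = [[3, 1], [5, 2]], computed once in Example 2.4.2.
A_inv = [[2, -1], [-5, 3]]

def solve_with_inverse(A_inv, b):
    """Compute X = A^{-1} B for a single right-hand-side column b."""
    return [sum(a * x for a, x in zip(row, b)) for row in A_inv]

# Three different constant vectors, solved with no further row reduction:
for b in ([3, 4], [1, 0], [7, 12]):
    print(b, "->", solve_with_inverse(A_inv, b))
```

Each new right-hand side costs only one matrix-vector product; the row reduction was paid for once, when \(A^{-1}\) was found.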
The Method for Solving a System of Equations (When a Unique Solution Exists)
- Express the system in the matrix equation \(\mathbf{AX} = \mathbf{B}\).
- Multiply both sides on the left by \(A^{-1}\): \[ \begin{aligned} \mathbf{AX} &= \mathbf{B} \\ \mathbf{A}^{-1}\mathbf{AX} &= \mathbf{A}^{-1}\mathbf{B} \\ \mathbf{IX} &= \mathbf{A}^{-1}\mathbf{B} \quad \text{(since } A^{-1}A = I\text{)} \\ \mathbf{X} &= \mathbf{A}^{-1}\mathbf{B} \end{aligned} \]
Solve the following system:
\[ \begin{aligned} 3x + y &= 3 \\ 5x + 2y &= 4 \end{aligned} \]
Example 2.4.5 Solution
First, we express the system as \(\mathbf{AX} = \mathbf{B}\):
\[ \begin{bmatrix} 3 & 1 \\ 5 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 3 \\ 4 \end{bmatrix} \]Matrix \(A\) is the same matrix from Example 2.4.2, so we already know:
\[\mathbf{A}^{-1} = \begin{bmatrix} 2 & -1 \\ -5 & 3 \end{bmatrix}\]We multiply both sides of the equation on the left by \(A^{-1}\) (remember: matrix multiplication is not commutative, so \(A^{-1}\) must go on the left of both sides):
\[ \begin{bmatrix} 2 & -1 \\ -5 & 3 \end{bmatrix} \begin{bmatrix} 3 & 1 \\ 5 & 2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ -5 & 3 \end{bmatrix} \begin{bmatrix} 3 \\ 4 \end{bmatrix} \]The product \(A^{-1}A\) gives the identity:
\[ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2(3) + (-1)(4) \\ -5(3) + 3(4) \end{bmatrix} \] \[ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ -3 \end{bmatrix} \]Therefore, the solution is \(x = 2\) and \(y = -3\).
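A solution like this is cheap to check by substituting back into the original equations — a two-line Python sanity check:

```python
x, y = 2, -3
# Substitute into the original system from Example 2.4.5.
assert 3 * x + y == 3      # first equation:  3x + y  = 3
assert 5 * x + 2 * y == 4  # second equation: 5x + 2y = 4
print("solution checks out")
```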
Solve the following system:
\[ \begin{aligned} x - y + z &= 6 \\ 2x + 3y \quad\;\, &= 1 \\ -2y + z &= 5 \end{aligned} \]
Example 2.4.6 Solution
We write the system in matrix form \(\mathbf{AX} = \mathbf{B}\):
\[ \begin{bmatrix} 1 & -1 & 1 \\ 2 & 3 & 0 \\ 0 & -2 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 6 \\ 1 \\ 5 \end{bmatrix} \]This is the same matrix \(A\) from Example 2.4.3, so we use its inverse:
\[\mathbf{A}^{-1} = \begin{bmatrix} 3 & -1 & -3 \\ -2 & 1 & 2 \\ -4 & 2 & 5 \end{bmatrix}\]Multiplying both sides on the left by \(A^{-1}\):
\[ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3 & -1 & -3 \\ -2 & 1 & 2 \\ -4 & 2 & 5 \end{bmatrix} \begin{bmatrix} 6 \\ 1 \\ 5 \end{bmatrix} \]Computing the product:
\[ \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3(6) + (-1)(1) + (-3)(5) \\ -2(6) + 1(1) + 2(5) \\ -4(6) + 2(1) + 5(5) \end{bmatrix} = \begin{bmatrix} 18 - 1 - 15 \\ -12 + 1 + 10 \\ -24 + 2 + 25 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix} \]Thus, \(x = 2\), \(y = -1\), and \(z = 3\).
Not every system has a unique solution, and not every matrix has an inverse. If the reduced row echelon form of \([A \mid I]\) cannot produce \(I\) on the left side, then \(A^{-1}\) does not exist, and the system either has no solution or infinitely many solutions. In such cases, we fall back to the Gauss-Jordan method directly on the augmented matrix of the system. Think of the inverse method as a "shortcut" that only works when the system has exactly one answer.
Problem Set 2.4
Problems 1–2: Verification
Verify that the given matrices are inverses of each other by showing that their product is the identity matrix.
1. \(\begin{bmatrix} 7 & 3 \\ 2 & 1 \end{bmatrix}\) and \(\begin{bmatrix} 1 & -3 \\ -2 & 7 \end{bmatrix}\)
Problem 1 Solution
We must show that \(AB = I_2\) and \(BA = I_2\).
Let \(A = \begin{bmatrix} 7 & 3 \\ 2 & 1 \end{bmatrix}\) and \(B = \begin{bmatrix} 1 & -3 \\ -2 & 7 \end{bmatrix}\).
Step 1: Compute \(AB\):
\[AB = \begin{bmatrix} 7 & 3 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 1 & -3 \\ -2 & 7 \end{bmatrix}\]Computing each entry:
- Entry \((1,1)\): \(7(1) + 3(-2) = 7 - 6 = 1\)
- Entry \((1,2)\): \(7(-3) + 3(7) = -21 + 21 = 0\)
- Entry \((2,1)\): \(2(1) + 1(-2) = 2 - 2 = 0\)
- Entry \((2,2)\): \(2(-3) + 1(7) = -6 + 7 = 1\)
Step 2: Compute \(BA\):
\[BA = \begin{bmatrix} 1 & -3 \\ -2 & 7 \end{bmatrix} \begin{bmatrix} 7 & 3 \\ 2 & 1 \end{bmatrix}\]Computing each entry:
- Entry \((1,1)\): \(1(7) + (-3)(2) = 7 - 6 = 1\)
- Entry \((1,2)\): \(1(3) + (-3)(1) = 3 - 3 = 0\)
- Entry \((2,1)\): \(-2(7) + 7(2) = -14 + 14 = 0\)
- Entry \((2,2)\): \(-2(3) + 7(1) = -6 + 7 = 1\)
Answer: Since \(AB = BA = I_2\), the matrices are inverses of each other: \(B = A^{-1}\) and \(A = B^{-1}\).
2. \(\begin{bmatrix} 1 & -1 & 0 \\ 2 & -4 & 1 \\ 3 & -5 & 1 \end{bmatrix}\) and \(\begin{bmatrix} 1 & 1 & -1 \\ 1 & 1 & -1 \\ 2 & 2 & -2 \end{bmatrix}\)
Problem 2 Solution
Let \(A = \begin{bmatrix} 1 & -1 & 0 \\ 2 & -4 & 1 \\ 3 & -5 & 1 \end{bmatrix}\) and \(B = \begin{bmatrix} 1 & 1 & -1 \\ 1 & 1 & -1 \\ 2 & 2 & -2 \end{bmatrix}\).
Step 1: Compute \(AB\):
\[AB = \begin{bmatrix} 1 & -1 & 0 \\ 2 & -4 & 1 \\ 3 & -5 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & -1 \\ 1 & 1 & -1 \\ 2 & 2 & -2 \end{bmatrix}\]Row 1 of \(AB\):
- Entry \((1,1)\): \(1(1) + (-1)(1) + 0(2) = 1 - 1 + 0 = 0\)
- Entry \((1,2)\): \(1(1) + (-1)(1) + 0(2) = 1 - 1 + 0 = 0\)
- Entry \((1,3)\): \(1(-1) + (-1)(-1) + 0(-2) = -1 + 1 + 0 = 0\)
Row 2 of \(AB\):
- Entry \((2,1)\): \(2(1) + (-4)(1) + 1(2) = 2 - 4 + 2 = 0\)
- Entry \((2,2)\): \(2(1) + (-4)(1) + 1(2) = 2 - 4 + 2 = 0\)
- Entry \((2,3)\): \(2(-1) + (-4)(-1) + 1(-2) = -2 + 4 - 2 = 0\)
Row 3 of \(AB\):
- Entry \((3,1)\): \(3(1) + (-5)(1) + 1(2) = 3 - 5 + 2 = 0\)
- Entry \((3,2)\): \(3(1) + (-5)(1) + 1(2) = 3 - 5 + 2 = 0\)
- Entry \((3,3)\): \(3(-1) + (-5)(-1) + 1(-2) = -3 + 5 - 2 = 0\)
Answer: The matrices are not inverses of each other. Their product \(AB\) equals the zero matrix, not the identity matrix. In fact, \(\det(A) = 1(-4 + 5) - (-1)(2 - 3) + 0 = 1 - 1 = 0\), so matrix \(A\) is singular (non-invertible) and has no inverse at all.
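The determinant computation in this solution can be reproduced in a few lines (a Python sketch; the function `det3` is written here for illustration, using cofactor expansion along the first row):

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, -1, 0], [2, -4, 1], [3, -5, 1]]
print(det3(A))  # 0, so A is singular and has no inverse
```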
Problems 3–6: Finding Inverses
Find the inverse of each matrix using the row-reduction method \([A \mid I]\).
3. \(\begin{bmatrix} 3 & -5 \\ -1 & 2 \end{bmatrix}\)
Problem 3 Solution
Find the inverse of \(A = \begin{bmatrix} 3 & -5 \\ -1 & 2 \end{bmatrix}\) by row-reducing \([A \mid I_2]\).
Step 1: Set up the augmented matrix \([A \mid I_2]\):
\[\left[ \begin{array}{cc|cc} 3 & -5 & 1 & 0 \\ -1 & 2 & 0 & 1 \end{array} \right]\]Step 2: Divide Row 1 by 3 (\(\frac{1}{3}R_1 \to R_1\)) to get a leading 1:
\[\left[ \begin{array}{cc|cc} 1 & -5/3 & 1/3 & 0 \\ -1 & 2 & 0 & 1 \end{array} \right]\]Step 3: Add Row 1 to Row 2 (\(R_1 + R_2 \to R_2\)) to eliminate the \(-1\) below the pivot:
\[\left[ \begin{array}{cc|cc} 1 & -5/3 & 1/3 & 0 \\ 0 & 1/3 & 1/3 & 1 \end{array} \right]\]Step 4: Multiply Row 2 by 3 (\(3R_2 \to R_2\)) to get a leading 1:
\[\left[ \begin{array}{cc|cc} 1 & -5/3 & 1/3 & 0 \\ 0 & 1 & 1 & 3 \end{array} \right]\]Step 5: Add \(\frac{5}{3}\) times Row 2 to Row 1 (\(\frac{5}{3}R_2 + R_1 \to R_1\)) to eliminate above the pivot:
- Row 1: \([1,\; -5/3 + 5/3,\; 1/3 + 5/3,\; 0 + 5] = [1,\; 0,\; 2,\; 5]\)
The left side is \(I_2\), so the right side is \(A^{-1}\).
Answer: \(A^{-1} = \begin{bmatrix} 2 & 5 \\ 1 & 3 \end{bmatrix}\)
4. \(\begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix}\)
Problem 4 Solution
Find the inverse of \(A = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix}\) by row-reducing \([A \mid I_3]\).
Step 1: Set up the augmented matrix \([A \mid I_3]\):
\[\left[ \begin{array}{ccc|ccc} 1 & 0 & 2 & 1 & 0 & 0 \\ 0 & 1 & 4 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array} \right]\]Notice that \(A\) is already in row echelon form with leading 1s on the diagonal. We only need to eliminate the entries above the pivots (back-substitution).
Step 2: Eliminate the 4 in position \((2,3)\) by subtracting 4 times Row 3 from Row 2 (\(R_2 - 4R_3 \to R_2\)):
- Row 2: \([0, 1, 4-4, 0, 1, 0-4] = [0, 1, 0, 0, 1, -4]\)
Step 3: Eliminate the 2 in position \((1,3)\) by subtracting 2 times Row 3 from Row 1 (\(R_1 - 2R_3 \to R_1\)):
- Row 1: \([1, 0, 2-2, 1, 0, 0-2] = [1, 0, 0, 1, 0, -2]\)
The left side is \(I_3\), so the right side is \(A^{-1}\).
Answer: \(A^{-1} = \begin{bmatrix} 1 & 0 & -2 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix}\)
5. \(\begin{bmatrix} 1 & 1 & -1 \\ 1 & 0 & 1 \\ 2 & 1 & 1 \end{bmatrix}\)
Problem 5 Solution
Find the inverse of \(A = \begin{bmatrix} 1 & 1 & -1 \\ 1 & 0 & 1 \\ 2 & 1 & 1 \end{bmatrix}\) by row-reducing \([A \mid I_3]\).
Step 1: Set up the augmented matrix \([A \mid I_3]\):
\[\left[ \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 2 & 1 & 1 & 0 & 0 & 1 \end{array} \right]\]Step 2: Eliminate below the pivot in column 1.
- \(R_2 - R_1 \to R_2\): \([1-1,\; 0-1,\; 1-(-1),\; 0-1,\; 1-0,\; 0-0] = [0,\; -1,\; 2,\; -1,\; 1,\; 0]\)
- \(R_3 - 2R_1 \to R_3\): \([2-2,\; 1-2,\; 1-(-2),\; 0-2,\; 0-0,\; 1-0] = [0,\; -1,\; 3,\; -2,\; 0,\; 1]\)
Step 3: Multiply Row 2 by \(-1\) (\(-R_2 \to R_2\)) to get a leading 1:
\[\left[ \begin{array}{ccc|ccc} 1 & 1 & -1 & 1 & 0 & 0 \\ 0 & 1 & -2 & 1 & -1 & 0 \\ 0 & -1 & 3 & -2 & 0 & 1 \end{array} \right]\]Step 4: Eliminate above and below the pivot in column 2.
- \(R_3 + R_2 \to R_3\): \([0,\; -1+1,\; 3-2,\; -2+1,\; 0-1,\; 1+0] = [0,\; 0,\; 1,\; -1,\; -1,\; 1]\)
- \(R_1 - R_2 \to R_1\): \([1,\; 1-1,\; -1-(-2),\; 1-1,\; 0-(-1),\; 0-0] = [1,\; 0,\; 1,\; 0,\; 1,\; 0]\)
Step 5: The pivot in column 3 is already 1. Eliminate above it.
- \(R_2 + 2R_3 \to R_2\): \([0,\; 1,\; -2+2,\; 1-2,\; -1-2,\; 0+2] = [0,\; 1,\; 0,\; -1,\; -3,\; 2]\)
- \(R_1 - R_3 \to R_1\): \([1,\; 0,\; 1-1,\; 0-(-1),\; 1-(-1),\; 0-1] = [1,\; 0,\; 0,\; 1,\; 2,\; -1]\)
The left side is \(I_3\), so the right side is \(A^{-1}\).
Answer: \(A^{-1} = \begin{bmatrix} 1 & 2 & -1 \\ -1 & -3 & 2 \\ -1 & -1 & 1 \end{bmatrix}\)
6. \(\begin{bmatrix} 1 & 1 & 1 \\ 3 & 1 & 0 \\ 1 & 1 & 2 \end{bmatrix}\)
Problem 6 Solution
Find the inverse of \(A = \begin{bmatrix} 1 & 1 & 1 \\ 3 & 1 & 0 \\ 1 & 1 & 2 \end{bmatrix}\) by row-reducing \([A \mid I_3]\).
Step 1: Set up the augmented matrix \([A \mid I_3]\):
\[\left[ \begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0 \\ 3 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 2 & 0 & 0 & 1 \end{array} \right]\]Step 2: Eliminate below the pivot in column 1.
- \(R_2 - 3R_1 \to R_2\): \([3-3,\; 1-3,\; 0-3,\; 0-3,\; 1-0,\; 0-0] = [0,\; -2,\; -3,\; -3,\; 1,\; 0]\)
- \(R_3 - R_1 \to R_3\): \([1-1,\; 1-1,\; 2-1,\; 0-1,\; 0-0,\; 1-0] = [0,\; 0,\; 1,\; -1,\; 0,\; 1]\)
Step 3: Divide Row 2 by \(-2\) (\(R_2 \div (-2) \to R_2\)) to get a leading 1:
\[\left[ \begin{array}{ccc|ccc} 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 3/2 & 3/2 & -1/2 & 0 \\ 0 & 0 & 1 & -1 & 0 & 1 \end{array} \right]\]Step 4: Eliminate the \(3/2\) in position \((2,3)\) using Row 3 (\(R_2 - \frac{3}{2}R_3 \to R_2\)):
- Row 2: \([0,\; 1,\; 3/2 - 3/2,\; 3/2 - (-3/2),\; -1/2 - 0,\; 0 - 3/2]\)
- \(= [0,\; 1,\; 0,\; 3,\; -1/2,\; -3/2]\)
Step 5: Eliminate the 1 in position \((1,3)\) using Row 3 (\(R_1 - R_3 \to R_1\)):
- Row 1: \([1,\; 1,\; 1-1,\; 1-(-1),\; 0-0,\; 0-1] = [1,\; 1,\; 0,\; 2,\; 0,\; -1]\)
Step 6: Eliminate the 1 in position \((1,2)\) using Row 2 (\(R_1 - R_2 \to R_1\)):
- Row 1: \([1,\; 1-1,\; 0,\; 2-3,\; 0-(-1/2),\; -1-(-3/2)]\)
- \(= [1,\; 0,\; 0,\; -1,\; 1/2,\; 1/2]\)
The left side is \(I_3\), so the right side is \(A^{-1}\).
Answer: \(A^{-1} = \begin{bmatrix} -1 & 1/2 & 1/2 \\ 3 & -1/2 & -3/2 \\ -1 & 0 & 1 \end{bmatrix}\)
Problems 7–10: Solving Systems
Express each system as \(\mathbf{AX} = \mathbf{B}\), then solve using the matrix inverses found in problems 3–6.
7. \(\begin{aligned} 3x - 5y &= 2 \\ -x + 2y &= 0 \end{aligned}\)
Problem 7 Solution
Step 1: Express the system in matrix form \(\mathbf{AX} = \mathbf{B}\).
The coefficient matrix is the same matrix from Problem 3:
\[\underbrace{\begin{bmatrix} 3 & -5 \\ -1 & 2 \end{bmatrix}}_{A} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}\]Step 2: Multiply both sides on the left by \(A^{-1}\) from Problem 3:
\[\begin{bmatrix} x \\ y \end{bmatrix} = A^{-1}B = \begin{bmatrix} 2 & 5 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} 2 \\ 0 \end{bmatrix}\]Step 3: Compute the matrix-vector product:
- \(x = 2(2) + 5(0) = 4\)
- \(y = 1(2) + 3(0) = 2\)
Answer: \(x = 4\), \(y = 2\)
8. \(\begin{aligned} x + 2z &= 8 \\ y + 4z &= 8 \\ z &= 3 \end{aligned}\)
Problem 8 Solution
Step 1: Express the system in matrix form \(\mathbf{AX} = \mathbf{B}\).
The coefficient matrix is the same matrix from Problem 4:
\[\underbrace{\begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix}}_{A} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 8 \\ 8 \\ 3 \end{bmatrix}\]Step 2: Multiply both sides on the left by \(A^{-1}\) from Problem 4:
\[\begin{bmatrix} x \\ y \\ z \end{bmatrix} = A^{-1}B = \begin{bmatrix} 1 & 0 & -2 \\ 0 & 1 & -4 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 8 \\ 8 \\ 3 \end{bmatrix}\]Step 3: Compute the matrix-vector product:
- \(x = 1(8) + 0(8) + (-2)(3) = 8 - 6 = 2\)
- \(y = 0(8) + 1(8) + (-4)(3) = 8 - 12 = -4\)
- \(z = 0(8) + 0(8) + 1(3) = 3\)
Answer: \(x = 2\), \(y = -4\), \(z = 3\)
9. \(\begin{aligned} x + y - z &= 2 \\ x + \quad\;\, z &= 7 \\ 2x + y + z &= 13 \end{aligned}\)
Problem 9 Solution
Step 1: Express the system in matrix form \(\mathbf{AX} = \mathbf{B}\).
The coefficient matrix is the same matrix from Problem 5:
\[\underbrace{\begin{bmatrix} 1 & 1 & -1 \\ 1 & 0 & 1 \\ 2 & 1 & 1 \end{bmatrix}}_{A} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2 \\ 7 \\ 13 \end{bmatrix}\]Step 2: Multiply both sides on the left by \(A^{-1}\) from Problem 5:
\[\begin{bmatrix} x \\ y \\ z \end{bmatrix} = A^{-1}B = \begin{bmatrix} 1 & 2 & -1 \\ -1 & -3 & 2 \\ -1 & -1 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 7 \\ 13 \end{bmatrix}\]Step 3: Compute the matrix-vector product:
- \(x = 1(2) + 2(7) + (-1)(13) = 2 + 14 - 13 = 3\)
- \(y = (-1)(2) + (-3)(7) + 2(13) = -2 - 21 + 26 = 3\)
- \(z = (-1)(2) + (-1)(7) + 1(13) = -2 - 7 + 13 = 4\)
Answer: \(x = 3\), \(y = 3\), \(z = 4\)
10. \(\begin{aligned} x + y + z &= 2 \\ 3x + y \quad\;\, &= 7 \\ x + y + 2z &= 3 \end{aligned}\)
Problem 10 Solution
Step 1: Express the system in matrix form \(\mathbf{AX} = \mathbf{B}\).
The coefficient matrix is the same matrix from Problem 6:
\[\underbrace{\begin{bmatrix} 1 & 1 & 1 \\ 3 & 1 & 0 \\ 1 & 1 & 2 \end{bmatrix}}_{A} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 2 \\ 7 \\ 3 \end{bmatrix}\]Step 2: Multiply both sides on the left by \(A^{-1}\) from Problem 6:
\[\begin{bmatrix} x \\ y \\ z \end{bmatrix} = A^{-1}B = \begin{bmatrix} -1 & 1/2 & 1/2 \\ 3 & -1/2 & -3/2 \\ -1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 7 \\ 3 \end{bmatrix}\]Step 3: Compute the matrix-vector product:
- \(x = (-1)(2) + \frac{1}{2}(7) + \frac{1}{2}(3) = -2 + \frac{7}{2} + \frac{3}{2} = -2 + 5 = 3\)
- \(y = 3(2) + (-\frac{1}{2})(7) + (-\frac{3}{2})(3) = 6 - \frac{7}{2} - \frac{9}{2} = 6 - 8 = -2\)
- \(z = (-1)(2) + 0(7) + 1(3) = -2 + 0 + 3 = 1\)
Answer: \(x = 3\), \(y = -2\), \(z = 1\)
Problems 11–12: Conceptual Questions
11. Why is it necessary that a matrix be a square matrix for its inverse to exist? Explain by relating the matrix to a system of equations.
Problem 11 Solution
A matrix must be square (\(n \times n\)) for its inverse to exist for several interconnected reasons:
Reason 1 — The Definition Requires It:
The inverse of \(A\) is a matrix \(A^{-1}\) satisfying \(AA^{-1} = A^{-1}A = I\). If \(A\) is \(m \times n\), then for both products to be defined, \(A^{-1}\) must be \(n \times m\). But then \(AA^{-1}\) is an \(m \times m\) matrix while \(A^{-1}A\) is \(n \times n\), and these can only be the same identity matrix if \(m = n\). So \(A\) must be square.
Reason 2 — Connection to Systems of Equations:
A matrix \(A\) represents the coefficients of a system of equations. If \(A\) is \(m \times n\):
- \(m\) = number of equations
- \(n\) = number of unknowns
If \(m \neq n\), the system is either overdetermined (more equations than unknowns, \(m > n\)) or underdetermined (fewer equations than unknowns, \(m < n\)). Neither type can guarantee a unique solution for every right-hand side \(B\), which is exactly what the inverse provides. Only when \(m = n\) (and the matrix has full rank) can the system \(AX = B\) have a unique solution for every \(B\), and that unique solution is given by \(X = A^{-1}B\).
Reason 3 — Reversibility of Transformations:
The inverse represents "undoing" a transformation. A square matrix maps \(\mathbb{R}^n \to \mathbb{R}^n\) (same space to same space), so the reverse map goes from \(\mathbb{R}^n \to \mathbb{R}^n\) as well. A non-square matrix maps between spaces of different dimensions, and such a mapping cannot be perfectly reversed — information is either lost (dimensions decrease) or ambiguity is introduced (dimensions increase).
Answer: A matrix must be square because the definition \(AB = BA = I\) requires both products to exist and produce the same-size identity, which is only possible when the matrix has equal numbers of rows and columns. Equivalently, only square systems can have a unique solution for every right-hand side, which is the capability an inverse provides.
12. Suppose we are solving a system \(\mathbf{AX} = \mathbf{B}\) by the matrix inverse method, but discover \(A\) has no inverse. How else can we solve this system? What can be said about the solutions of this system?
Problem 12 Solution
Part 1 — Alternative Method:
If \(A\) has no inverse, we can still solve the system using Gauss-Jordan elimination (row reduction) applied directly to the augmented matrix \([A \mid B]\). This method works regardless of whether \(A\) is invertible.
Step 1: Form the augmented matrix by placing \(B\) as an extra column: \([A \mid B]\).
Step 2: Apply row operations to reduce the left side as far as possible toward row echelon form (or reduced row echelon form).
Step 3: Read the solution from the resulting matrix, or determine that no solution exists.
Part 2 — What Can Be Said About the Solutions:
If \(A\) has no inverse (i.e., \(A\) is singular, meaning \(\det(A) = 0\)), then the system \(AX = B\) has no unique solution. Exactly one of two situations occurs:
Case 1: No Solution (Inconsistent System)
During row reduction, you obtain a row of the form \([0 \; 0 \; \cdots \; 0 \mid c]\) where \(c \neq 0\). This represents the equation \(0 = c\), which is a contradiction. Geometrically, the planes (or lines) represented by the equations do not all share a common intersection point — they are parallel or otherwise non-intersecting.
Case 2: Infinitely Many Solutions (Dependent System)
Row reduction produces one or more rows of all zeros \([0 \; 0 \; \cdots \; 0 \mid 0]\), and no contradictory rows. This means at least one variable is a free variable that can take any value. The solution is expressed in parametric form. Geometrically, the planes intersect along a line (one free variable) or a plane (two free variables), giving infinitely many intersection points.
Answer: Use Gauss-Jordan elimination on \([A \mid B]\). The system either has no solution or infinitely many solutions — it cannot have exactly one solution (since that would require \(A\) to be invertible).
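The two cases can be illustrated with a toy pair of systems (invented here, not from the problem set): take \(x + y = 1\) together with \(2x + 2y = c\). Eliminating with \(R_2 - 2R_1\) leaves the row \([0 \;\; 0 \mid c - 2]\), and the value of \(c\) decides which case occurs:

```python
def classify(c):
    """Classify the system x + y = 1, 2x + 2y = c after the step R2 - 2*R1."""
    row2 = [2 - 2 * 1, 2 - 2 * 1, c - 2 * 1]  # -> [0, 0, c - 2]
    if row2[2] != 0:
        return "no solution"            # [0 0 | nonzero]: contradiction 0 = c - 2
    return "infinitely many solutions"  # [0 0 | 0]: y is a free variable

print(classify(3))  # no solution (the two lines are parallel)
print(classify(2))  # infinitely many solutions (the two lines coincide)
```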