If we write \(\xvec=\twovec xy\text{,}\) use the dot product to write an equation for the vectors orthogonal to \(\vvec\) in terms of \(x\) and \(y\text{.}\)
Section 3.5 introduced the column space \(\col(A)\) and null space \(\nul(A)\) of a matrix \(A\text{.}\) Suppose that \(A\) is a matrix and \(\xvec\) is a vector satisfying \(A\xvec=\zerovec\text{.}\)
Given a subspace \(W\) of \(\real^m\text{,}\) the orthogonal complement of \(W\) is the set of vectors in \(\real^m\) each of which is orthogonal to every vector in \(W\text{.}\) We denote the orthogonal complement by \(W^\perp\text{.}\)
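In symbols, this definition says that
\begin{equation*}
W^\perp = \left\{\xvec\in\real^m \mid \xvec\cdot\wvec = 0 \text{ for all } \wvec \text{ in } W\right\}\text{.}
\end{equation*}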
Suppose that \(\wvec_1=\threevec10{-2}\) and \(\wvec_2=\threevec11{-1}\) form a basis for \(W\text{,}\) a two-dimensional subspace of \(\real^3\text{.}\)
Suppose that the vector \(\xvec\) is orthogonal to \(\wvec_1\text{.}\) If we write \(\xvec=\threevec{x_1}{x_2}{x_3}\text{,}\) use the fact that \(\wvec_1\cdot\xvec = 0\) to write a linear equation for \(x_1\text{,}\) \(x_2\text{,}\) and \(x_3\text{.}\)
Suppose that \(\xvec\) is also orthogonal to \(\wvec_2\text{.}\) In the same way, write a linear equation for \(x_1\text{,}\) \(x_2\text{,}\) and \(x_3\) that arises from the fact that \(\wvec_2\cdot\xvec = 0\text{.}\)
If \(\xvec\) is orthogonal to both \(\wvec_1\) and \(\wvec_2\text{,}\) these two equations give us a linear system \(B\xvec=\zerovec\) for some matrix \(B\text{.}\) Identify the matrix \(B\) and write a parametric description of the solution space to the equation \(B\xvec = \zerovec\text{.}\)
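The activity intends this computation by hand, but readers who wish to check their parametric description can do so with SymPy; the sketch below assembles the matrix \(B\) from the two dot-product equations and asks for a basis of its null space.

```python
import sympy as sp

# rows of B come from the equations w1·x = 0 and w2·x = 0,
# using w1 = (1, 0, -2) and w2 = (1, 1, -1)
B = sp.Matrix([[1, 0, -2],
               [1, 1, -1]])

# nullspace() returns a basis for the solution set of B x = 0
basis = B.nullspace()
print(basis[0].T)  # Matrix([[2, -1, 1]])
```

Since the basis has a single vector, the solution space is a line: every solution is a scalar multiple of that basis vector.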
Since \(\wvec_1\) and \(\wvec_2\) form a basis for the two-dimensional subspace \(W\text{,}\) any vector \(\wvec\) in \(W\) can be written as a linear combination
\begin{equation*}
\wvec = c_1\wvec_1 + c_2\wvec_2\text{.}
\end{equation*}
If \(\xvec\) is orthogonal to both \(\wvec_1\) and \(\wvec_2\text{,}\) use the distributive property of dot products to explain why \(\xvec\) is orthogonal to \(\wvec\text{.}\)
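As a numerical sanity check of this fact (a sketch, not the explanation the activity asks for), the snippet below takes the basis vectors \(\wvec_1\) and \(\wvec_2\) from this activity, a vector \(\xvec\) orthogonal to both, and a random linear combination \(\wvec\text{,}\) and confirms that \(\wvec\cdot\xvec\) vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)

w1 = np.array([1.0, 0.0, -2.0])
w2 = np.array([1.0, 1.0, -1.0])
x = np.array([2.0, -1.0, 1.0])   # orthogonal to both w1 and w2

# any w in W is a linear combination c1*w1 + c2*w2
c1, c2 = rng.standard_normal(2)
w = c1 * w1 + c2 * w2

# distributivity: w·x = c1*(w1·x) + c2*(w2·x), and both dot products are 0
print(np.dot(w, x))  # ~0, up to floating-point roundoff
```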
The next activity illustrates how multiplying a vector by \(A^T\) is related to computing dot products with the columns of \(A\text{.}\) You'll develop a better understanding of this relationship if you compute the dot products and matrix products in this activity without using technology.
Now write the matrix \(A = \begin{bmatrix} \vvec_1 \amp \vvec_2 \end{bmatrix}\) and its transpose \(A^T\text{.}\) Find the product \(A^T\wvec\) and describe how this product computes both dot products \(\vvec_1\cdot\wvec\) and \(\vvec_2\cdot\wvec\text{.}\)
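The activity asks for this computation by hand, but the relationship can also be verified in NumPy. The vectors below are hypothetical stand-ins for \(\vvec_1\text{,}\) \(\vvec_2\text{,}\) and \(\wvec\) (their actual values appear earlier in the activity); the point is that the entries of \(A^T\wvec\) are exactly the two dot products.

```python
import numpy as np

# sample vectors standing in for v1, v2, and w from the activity
v1 = np.array([1, 2, -1])
v2 = np.array([0, 1, 3])
w = np.array([2, -1, 1])

A = np.column_stack([v1, v2])   # the columns of A are v1 and v2

# each entry of A^T w is the dot product of w with a column of A
print(A.T @ w)                      # [v1·w, v2·w]
print(np.array([v1 @ w, v2 @ w]))   # the same two numbers
```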
Suppose that \(\xvec\) is a vector that is orthogonal to both \(\vvec_1\) and \(\vvec_2\text{.}\) What does this say about the dot products \(\vvec_1\cdot\xvec\) and \(\vvec_2\cdot\xvec\text{?}\) What does this say about the product \(A^T\xvec\text{?}\)
Remember that \(\nul(A^T)\text{,}\) the null space of \(A^T\text{,}\) is the solution set of the equation \(A^T\xvec=\zerovec\text{.}\) If \(\xvec\) is a vector in \(\nul(A^T)\text{,}\) explain why \(\xvec\) must be orthogonal to both \(\vvec_1\) and \(\vvec_2\text{.}\)
Remember that \(\col(A)\text{,}\) the column space of \(A\text{,}\) is the set of linear combinations of the columns of \(A\text{.}\) Therefore, any vector in \(\col(A)\) can be written as \(c_1\vvec_1+c_2\vvec_2\text{.}\) If \(\xvec\) is a vector in \(\nul(A^T)\text{,}\) explain why \(\xvec\) is orthogonal to every vector in \(\col(A)\text{.}\)
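This orthogonality can be checked symbolically. The sketch below uses hypothetical columns for \(A\) (stand-ins for \(\vvec_1\) and \(\vvec_2\)), finds a basis vector \(\xvec\) of \(\nul(A^T)\text{,}\) and confirms that \(\xvec\cdot(c_1\vvec_1+c_2\vvec_2) = 0\) for arbitrary symbolic weights \(c_1\) and \(c_2\text{.}\)

```python
import sympy as sp

# hypothetical columns v1 and v2 of A
A = sp.Matrix([[1, 0],
               [2, 1],
               [-1, 3]])

# a basis vector for Nul(A^T), the solution set of A^T x = 0
x = A.T.nullspace()[0]

# a generic vector c1*v1 + c2*v2 in Col(A)
c1, c2 = sp.symbols('c1 c2')
w = c1 * A.col(0) + c2 * A.col(1)

# x is orthogonal to every vector in Col(A)
print(sp.simplify(w.dot(x)))  # 0
```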
Suppose that \(W\) is a \(5\)-dimensional subspace of \(\real^9\) and that \(A\) is a matrix whose columns form a basis for \(W\text{;}\) that is, \(\col(A) = W\text{.}\)