>
> with scalars $a_1, \dots, a_n$ is called a **linear combination** of $\mathbf{v}_1, \dots, \mathbf{v}_n$.
>
> The set of all linear combinations of $\mathbf{v}_1, \dots, \mathbf{v}_n$ is called the **span** of $\mathbf{v}_1, \dots, \mathbf{v}_n$ which is denoted by $\text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$.
For example, the null space of a matrix can be described as the span of a set of vectors.
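As a concrete illustration, the following minimal SymPy sketch (the matrix $A$ is an arbitrary example) computes a set of vectors whose span is the null space:

```python
import sympy as sp

# An example 2x4 matrix; its null space N(A) is a subspace of R^4.
A = sp.Matrix([[1, 2, 0, 1],
               [0, 0, 1, 2]])

# nullspace() returns basis vectors: N(A) = span of these vectors.
basis = A.nullspace()
for v in basis:
    print(v.T)  # (-2, 1, 0, 0) and (-1, 0, -2, 1)

# Any linear combination of them lies in N(A).
w = 3 * basis[0] - 2 * basis[1]
assert A * w == sp.zeros(2, 1)
```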
> *Theorem*: if $\mathbf{v}_1, \dots, \mathbf{v}_n$ are vectors in a vector space $V$ with $n \in \mathbb{N}$ then $\text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$ is a subspace of $V$.
??? note "*Proof*:"
Let $b$ be a scalar and $\mathbf{u} \in \text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$ given by
$$
a_1 \mathbf{v}_1 + \dots + a_n \mathbf{v}_n,
b \mathbf{u} = (b a_1)\mathbf{v}_1 + \dots + (b a_n)\mathbf{v}_n,
$$
it follows that $b \mathbf{u} \in \text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$.
If we also have $\mathbf{w} \in \text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$ given by
$$
b_1 \mathbf{v}_1 + \dots + b_n \mathbf{v}_n,
\mathbf{u} + \mathbf{w} = (a_1 + b_1) \mathbf{v}_1 + \dots + (a_n + b_n)\mathbf{v}_n,
$$
it follows that $\mathbf{u} + \mathbf{w} \in \text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$. Therefore $\text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$ is a subspace of $V$.
For example, a vector $\mathbf{x} \in \mathbb{R}^3$ is in $\text{span}(\mathbf{e}_1, \mathbf{e}_2)$ if and only if it lies in the $x_1 x_2$-plane in 3-space. Thus we can think of the $x_1 x_2$-plane as the geometrical representation of the subspace $\text{span}(\mathbf{e}_1, \mathbf{e}_2)$.
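This membership test can be phrased computationally: $\mathbf{x} \in \text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$ exactly when appending $\mathbf{x}$ to the spanning vectors does not increase the rank. A minimal NumPy sketch (the helper `in_span` and the test vectors are illustrative):

```python
import numpy as np

def in_span(vectors, x):
    """x is in span(vectors) iff appending x does not increase the rank."""
    V = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([V, x])) == np.linalg.matrix_rank(V)

e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(in_span([e1, e2], np.array([3.0, -2.0, 0.0])))  # True: in the x1x2-plane
print(in_span([e1, e2], np.array([0.0, 0.0, 1.0])))   # False: leaves the plane
```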
> *Definition*: the set $\{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ with $n \in \mathbb{N}$ is a **spanning set** for $V$ if and only if every vector in $V$ can be written as a linear combination of $\mathbf{v}_1, \dots, \mathbf{v}_n$.
We have the following observations.
> *Proposition*: if $\mathbf{v}_1, \dots, \mathbf{v}_n$ with $n \in \mathbb{N}$ span a vector space $V$ and one of these vectors can be written as a linear combination of the other $n-1$ vectors, then those $n-1$ vectors span $V$.
??? note "*Proof*:"
This result can be used to test whether $n$ vectors are linearly independent in $\mathbb{R}^n$ for $n \in \mathbb{N}$.
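As a sketch of this test in NumPy (the three vectors are arbitrary examples), $n$ vectors in $\mathbb{R}^n$ are linearly independent precisely when the matrix with these vectors as its columns is nonsingular:

```python
import numpy as np

# Three vectors in R^3, placed as the columns of a 3x3 matrix.
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# A nonzero determinant means the matrix is nonsingular,
# i.e. its columns are linearly independent.
print(abs(np.linalg.det(V)) > 1e-12)  # True: the vectors are independent
```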
> *Theorem*: let $\mathbf{v}_1, \dots, \mathbf{v}_n$ be vectors in a vector space $V$ with $n \in \mathbb{N}$. A vector $\mathbf{v} \in \text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$ can be written uniquely as a linear combination of $\mathbf{v}_1, \dots, \mathbf{v}_n$ if and only if $\mathbf{v}_1, \dots, \mathbf{v}_n$ are linearly independent.
??? note "*Proof*:"
If $\mathbf{v} \in \text{span}(\mathbf{v}_1, \dots, \mathbf{v}_n)$ with $n \in \mathbb{N}$ then $\mathbf{v}$ can be written as a linear combination
$$
\mathbf{v} = a_1 \mathbf{v}_1 + \dots + a_n \mathbf{v}_n.
On the other hand, if $\mathbf{v}_1, \dots, \mathbf{v}_n$ are linearly dependent then some nontrivial linear combination of them equals $\mathbf{0}$; adding it to the representation above yields coefficients with $a_i \neq b_i$ for some $i \in \{1, \dots, n\}$. Therefore the representation of $\mathbf{v}$ is not unique when $\mathbf{v}_1, \dots, \mathbf{v}_n$ are linearly dependent.
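Conversely, the failure of uniqueness for dependent vectors is easy to exhibit numerically; in this NumPy sketch (vectors chosen for illustration, with $\mathbf{v}_2 = 2\mathbf{v}_1$) the same vector has two different coefficient vectors:

```python
import numpy as np

v1 = np.array([1.0, 2.0])
v2 = 2 * v1                # v2 depends linearly on v1

u = 3 * v1 + 1 * v2        # coefficients (3, 1)
w = 1 * v1 + 2 * v2        # coefficients (1, 2)
print(np.allclose(u, w))   # True: one vector, two representations
```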
## Basis and dimension
> *Definition*: the vectors $\mathbf{v}_1,\dots,\mathbf{v}_n \in V$ form a **basis** for $V$ if and only if
>
> 1. $\mathbf{v}_1,\dots,\mathbf{v}_n$ are linearly independent,
> 2. $\mathbf{v}_1,\dots,\mathbf{v}_n$ span $V$.
Therefore a basis determines a vector space, but a vector space does not have a unique basis.
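For instance (a small NumPy check; the second basis is an arbitrary example), $\mathbb{R}^2$ is described equally well by the standard basis or by a skewed one:

```python
import numpy as np

# Two bases for R^2, stored as matrix columns.
E = np.array([[1.0, 0.0],
              [0.0, 1.0]])  # standard basis e1, e2
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # a different basis

# Full rank means the columns are independent and span R^2.
print(np.linalg.matrix_rank(E), np.linalg.matrix_rank(F))  # 2 2
```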
> *Theorem*: if $\{\mathbf{v}_1,\dots,\mathbf{v}_n\}$ is a spanning set for a vector space $V$, then any collection of $m$ vectors in $V$, where $m > n$, is linearly dependent.
??? note "*Proof*:"
Let $\mathbf{u}_1, \dots, \mathbf{u}_m \in V$, where $m > n$. Then since $\{\mathbf{v}_1,\dots,\mathbf{v}_n\}$ spans $V$ we have
$$
\mathbf{u}_i = a_{i1} \mathbf{v}_1 + \dots + a_{in} \mathbf{v}_n,
$$
for $i \in \{1, \dots, m\}$ with $a_{ij} \in \mathbb{R}$.
A linear combination $c_1 \mathbf{u}_1 + \dots + c_m \mathbf{u}_m$ can be written in the form
$$
c_1 \sum_{j=1}^n a_{1j} \mathbf{v}_j + \dots + c_m \sum_{j=1}^n a_{mj} \mathbf{v}_j,
$$
obtaining
$$
c_1 \mathbf{u}_1 + \dots + c_m \mathbf{u}_m = \sum_{i=1}^m \bigg( c_i \sum_{j=1}^n a_{ij} \mathbf{v}_j \bigg) = \sum_{j=1}^n \bigg(\sum_{i=1}^m a_{ij} c_i \bigg) \mathbf{v}_j.
$$
Consider the system of equations
$$
\sum_{i=1}^m a_{ij} c_i = 0
$$
for $j \in \{1, \dots, n\}$; this is a homogeneous system with more unknowns than equations, since $m > n$. Therefore the system must have a nontrivial solution $(\hat c_1, \dots, \hat c_m)^T$, but then
$$
\hat c_1 \mathbf{u}_1 + \dots + \hat c_m \mathbf{u}_m = \sum_{j=1}^n 0 \mathbf{v}_j = \mathbf{0},
$$
hence $\mathbf{u}_1, \dots, \mathbf{u}_m$ are linearly dependent.
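The homogeneous system in the proof can be solved explicitly; a SymPy sketch with three example vectors in $\mathbb{R}^2$ (so $m = 3 > n = 2$):

```python
import sympy as sp

# Three vectors in R^2 as the columns of U; m = 3 > n = 2.
U = sp.Matrix([[1, 0, 2],
               [0, 1, 3]])

# A nontrivial solution c of U*c = 0 gives the dependence relation
# c1*u1 + c2*u2 + c3*u3 = 0.
c = U.nullspace()[0]
print(c.T)                      # (-2, -3, 1)
assert U * c == sp.zeros(2, 1)  # the combination is the zero vector
```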
> *Corollary*: if both $\{\mathbf{v}_1,\dots,\mathbf{v}_n\}$ and $\{\mathbf{u}_1,\dots,\mathbf{u}_m\}$ are bases for a vector space $V$, then $n = m$.
??? note "*Proof*:"
Let both $\{\mathbf{v}_1,\dots,\mathbf{v}_n\}$ and $\{\mathbf{u}_1,\dots,\mathbf{u}_m\}$ be bases for $V$. Since $\mathbf{v}_1,\dots,\mathbf{v}_n$ span $V$ and $\mathbf{u}_1,\dots,\mathbf{u}_m$ are linearly independent, it follows that $m \leq n$. Similarly, $\mathbf{u}_1,\dots,\mathbf{u}_m$ span $V$ and $\mathbf{v}_1,\dots,\mathbf{v}_n$ are linearly independent, so $n \leq m$. This implies $n = m$.
With this result we may now refer to the number of elements in any basis for a given vector space, which leads to the following definition.
> *Definition*: let $V$ be a vector space. If $V$ has a basis consisting of $n \in \mathbb{N}$ vectors, then $V$ has **dimension** $n$. The subspace $\{\mathbf{0}\}$ of $V$ is said to have dimension $0$. $V$ is said to be **finite dimensional** if there is a finite set of vectors that spans $V$, otherwise $V$ is **infinite dimensional**.
So a single nonzero vector spans exactly a one-dimensional subspace. For multiple vectors we have the following theorem.
> *Theorem*: if $V$ is a vector space of dimension $n \in \mathbb{N} \setminus \{0\}$, then
>
> 1. any set of $n$ linearly independent vectors spans $V$,
> 2. any $n$ vectors that span $V$ are linearly independent.
??? note "*Proof*:"
To prove 1, suppose that $\mathbf{v}_1,\dots,\mathbf{v}_n \in V$ are linearly independent and $\mathbf{v} \in V$. Since $V$ has dimension $n$, it has a basis consisting of $n$ vectors and these vectors span $V$. It follows that $\mathbf{v}_1,\dots,\mathbf{v}_n, \mathbf{v}$ must be linearly dependent. Thus there exist scalars $c_1, \dots, c_n, c_{n+1}$, not all zero, such that
$$
c_1 \mathbf{v}_1 + \dots + c_n \mathbf{v}_n + c_{n+1} \mathbf{v} = \mathbf{0}.
$$
The scalar $c_{n+1}$ cannot be zero, since that would imply that $\mathbf{v}_1,\dots,\mathbf{v}_n$ are linearly dependent, hence
$$
\mathbf{v} = a_1 \mathbf{v}_1 + \dots + a_n \mathbf{v}_n,
$$
with
$$
a_i = - \frac{c_i}{c_{n+1}}
$$
for $i \in \{1, \dots, n\}$. Since $\mathbf{v}$ was an arbitrary vector in $V$ it follows that $\mathbf{v}_1, \dots, \mathbf{v}_n$ span $V$.
To prove 2, suppose that $\mathbf{v}_1,\dots,\mathbf{v}_n$ span $V$. If $\mathbf{v}_1,\dots,\mathbf{v}_n$ are linearly dependent, then one vector $\mathbf{v}_i$ can be written as a linear combination of the others; take $i = n$ without loss of generality. It follows that $\mathbf{v}_1,\dots,\mathbf{v}_{n-1}$ will still span $V$, which contradicts $\dim V = n$. Therefore $\mathbf{v}_1, \dots, \mathbf{v}_n$ must be linearly independent.
Therefore, if $\dim V = n$, no set of fewer than $n$ vectors can span $V$.
### Change of basis
> *Definition*: let $V$ be a vector space and let $E = \{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ be an ordered basis for $V$. If $\mathbf{v}$ is any element of $V$, then $\mathbf{v}$ can be written in the form
>
> $$
> \mathbf{v} = v_1 \mathbf{e}_1 + \dots + v_n \mathbf{e}_n,
> $$
>
> where $v_1, \dots, v_n \in \mathbb{R}$ are the **coordinates** of $\mathbf{v}$ relative to $E$.
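Computing the coordinates of $\mathbf{v}$ relative to $E$ amounts to solving a linear system with the basis vectors as columns; a NumPy sketch with an example basis of $\mathbb{R}^2$:

```python
import numpy as np

# Ordered basis E = {e1, e2} of R^2, stored as matrix columns.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

# Solving E @ c = v yields the coordinates of v relative to E.
c = np.linalg.solve(E, v)
print(c)                    # [1. 2.], so v = 1*e1 + 2*e2
assert np.allclose(E @ c, v)
```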
## Row space and column space
> *Definition*: if $A$ is an $m \times n$ matrix, the subspace of $\mathbb{R}^{n}$ spanned by the row vectors of $A$ is called the **row space** of $A$. The subspace of $\mathbb{R}^m$ spanned by the column vectors of $A$ is called the **column space** of $A$.
With the definition of a row space the following theorem may be posed.
> *Theorem*: two row equivalent matrices have the same row space.
??? note "*Proof*:"
Let $A$ and $B$ be two matrices; if $B$ is row equivalent to $A$ then $B$ can be formed from $A$ by a finite sequence of row operations. Thus the row vectors of $B$ must be linear combinations of the row vectors of $A$. Consequently, the row space of $B$ must be a subspace of the row space of $A$. Since $A$ is row equivalent to $B$, by the same reasoning, the row space of $A$ is a subspace of the row space of $B$. Hence the two row spaces are equal.
> *Definition*: the **rank** of a matrix $A$, denoted as $\text{rank}(A)$, is the dimension of the row space of $A$.
The rank of a matrix may be determined by reducing the matrix to row echelon form. The nonzero rows of the row echelon matrix will form a basis for the row space. The rank may be interpreted as a measure for singularity of the matrix.
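A SymPy sketch of this procedure (the matrix is an arbitrary example); the pivots of the reduced row echelon form determine the rank, and its nonzero rows form a basis for the row space:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

R, pivots = A.rref()   # reduced row echelon form and its pivot columns
print(len(pivots))     # rank(A) = number of pivots = 2
print(A.rank())        # the built-in rank agrees
```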
> *Theorem*: a linear system $A \mathbf{x} = \mathbf{b}$ is consistent if and only if $\mathbf{b}$ is in the column space of $A$.
??? note "*Proof*:"
This is a restatement of a previous theorem in [systems of linear equations](docs/en/mathematics/linear-algebra/systems-of-linear-equations.md).
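In computations this criterion becomes a rank test: $A \mathbf{x} = \mathbf{b}$ is consistent exactly when appending $\mathbf{b}$ to the columns of $A$ does not increase the rank. A NumPy sketch with example data:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])      # singular: the column space is a line
b_in = np.array([3.0, 6.0])     # lies in the column space
b_out = np.array([1.0, 0.0])    # does not

def consistent(A, b):
    """A x = b is consistent iff b is in the column space of A."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(consistent(A, b_in))   # True
print(consistent(A, b_out))  # False
```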
> *Theorem*: let $A$ be an $m \times n$ matrix. The linear system $A \mathbf{x} = \mathbf{b}$ is consistent for every $\mathbf{b} \in \mathbb{R}^m$ if and only if the column vectors of $A$ span $\mathbb{R}^m$. The system $A \mathbf{x} = \mathbf{b}$ has at most one solution for every $\mathbf{b}$ if and only if the column vectors of $A$ are linearly independent.
??? note "*Proof*:"
Let $A$ be an $m \times n$ matrix. Since $A \mathbf{x}$ is a linear combination of the column vectors of $A$ with coefficients $x_1, \dots, x_n$, the system $A \mathbf{x} = \mathbf{b}$ will be consistent for every $\mathbf{b} \in \mathbb{R}^m$ if and only if the column vectors of $A$ span $\mathbb{R}^m$. To prove the second statement, note that if $A \mathbf{x} = \mathbf{b}$ has at most one solution for every $\mathbf{b}$, then in particular $A \mathbf{x} = \mathbf{0}$ can have only the trivial solution and hence the column vectors of $A$ must be linearly independent. Conversely, if the column vectors of $A$ are linearly independent, $A \mathbf{x} = \mathbf{0}$ has only the trivial solution. If $\mathbf{x}_1, \mathbf{x}_2$ were both solutions of $A \mathbf{x} = \mathbf{b}$ then $\mathbf{x}_1 - \mathbf{x}_2$ would be a solution of $A \mathbf{x} = \mathbf{0}$, since
$$
A(\mathbf{x}_1 - \mathbf{x}_2) = A\mathbf{x}_1 - A\mathbf{x}_2 = \mathbf{b} - \mathbf{b} = \mathbf{0}.
$$
It follows that $\mathbf{x}_1 - \mathbf{x}_2 = \mathbf{0}$ and hence $\mathbf{x}_1 = \mathbf{x}_2$.
> *Corollary*: an $n \times n$ matrix $A$ is nonsingular if and only if the column vectors of $A$ form a basis for $\mathbb{R}^n$.
??? note "*Proof*:"
Let $A$ be an $m \times n$ matrix. If the column vectors of $A$ span $\mathbb{R}^m$, then $n$ must be greater than or equal to $m$, since no set of fewer than $m$ vectors can span $\mathbb{R}^m$. If the columns of $A$ are linearly independent, then $n$ must be less than or equal to $m$, since every set of more than $m$ vectors in $\mathbb{R}^m$ is linearly dependent. Thus, if the column vectors of $A$ form a basis for $\mathbb{R}^m$, then $n = m$. The previous theorem then shows that $A$ is nonsingular if and only if its column vectors span $\mathbb{R}^n$ and are linearly independent, that is, if and only if they form a basis for $\mathbb{R}^n$.
> *Definition*: the **nullity** of a matrix $A$, denoted as $\text{nullity}(A)$, is the dimension of the null space of $A$.
The nullity of $A$ is the number of columns without a pivot in the reduced echelon form.
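Equivalently, in a SymPy sketch (example matrix as before), the nullity is read off from the pivot columns of the reduced echelon form:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0, 1],
               [0, 0, 1, 2]])

_, pivots = A.rref()
nullity = A.cols - len(pivots)  # columns without a pivot
print(nullity)                  # 2
print(len(A.nullspace()))       # agrees: dimension of N(A) is 2
```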
> *Theorem*: if $A$ is an $m \times n$ matrix, then
>
> $$
> \text{rank}(A) + \text{nullity}(A) = n.
> $$
??? note "*Proof*:"
Let $U$ be the reduced echelon form of $A$. The system $A \mathbf{x} = \mathbf{0}$ is equivalent to the system $U \mathbf{x} = \mathbf{0}$. If $A$ has rank $r$, then $U$ will have $r$ nonzero rows and consequently the system $U \mathbf{x} = \mathbf{0}$ will involve $r$ pivot variables and $n - r$ free variables. The dimension of the null space equals the number of free variables, hence $\text{rank}(A) + \text{nullity}(A) = r + (n - r) = n$.
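The count in the proof can be verified directly; a SymPy sketch with an example $3 \times 4$ matrix, so $n = 4$:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3, 4],
               [2, 4, 6, 8],
               [0, 1, 1, 0]])   # 3x4, so n = 4

rank = A.rank()
nullity = len(A.nullspace())
print(rank, nullity, rank + nullity)  # 2 2 4, and indeed 2 + 2 = n
```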
> *Theorem*: if $A$ is an $m \times n$ matrix, the dimension of the row space of $A$ equals the dimension of the column space of $A$.
??? note "*Proof*:"
Will be added later.
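Pending the proof, the statement can at least be checked numerically: the row space of $A$ is the column space of $A^T$, so the two dimensions are compared by comparing ranks. A NumPy sketch on a random example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 6)).astype(float)

# rank(A) is the dimension of the column space; rank(A^T) is the
# dimension of the column space of A^T, i.e. of the row space of A.
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T))  # True
```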