
Updated vector spaces and added first section of linear transformations.

This commit is contained in:
Luc Bijl 2024-03-04 21:25:41 +01:00
parent fd45a4377d
commit 8ea7d3074e
3 changed files with 82 additions and 7 deletions

mkdocs.yml

@@ -87,6 +87,7 @@ nav:
- 'Elementary matrices': mathematics/linear-algebra/matrices/elementary-matrices.md
- 'Determinants': mathematics/linear-algebra/determinants.md
- 'Vector spaces': mathematics/linear-algebra/vector-spaces.md
- 'Linear transformations': mathematics/linear-algebra/linear-transformations.md
- 'Calculus':
- 'Limits': mathematics/calculus/limits.md
- 'Continuity': mathematics/calculus/continuity.md

docs/en/mathematics/linear-algebra/linear-transformations.md

@@ -0,0 +1,61 @@
# Linear transformations
## Definition
> *Definition*: let $V$ and $W$ be vector spaces. A mapping $L: V \to W$ is a **linear transformation** or **linear map** if
>
> $$
> L(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2) = \lambda L(\mathbf{v}_1) + \mu L(\mathbf{v}_2),
> $$
>
> for all $\mathbf{v}_{1,2} \in V$ and $\lambda, \mu \in \mathbb{K}$.
In the case that the vector spaces $V$ and $W$ are the same, $V = W$, a linear transformation $L: V \to V$ will be referred to as a **linear operator** on $V$.
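For example, any $m \times n$ matrix $A$ defines a linear transformation $L: \mathbb{R}^n \to \mathbb{R}^m$ given by
$$
L(\mathbf{x}) = A \mathbf{x},
$$
since $A(\lambda \mathbf{x}_1 + \mu \mathbf{x}_2) = \lambda A \mathbf{x}_1 + \mu A \mathbf{x}_2$ for all $\mathbf{x}_{1,2} \in \mathbb{R}^n$ and $\lambda, \mu \in \mathbb{R}$.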
## The image and kernel
Let $L: V \to W$ be a linear transformation from a vector space $V$ to a vector space $W$. In this section the effect that $L$ has on subspaces of $V$ is considered. Of particular importance is the set of vectors in $V$ that get mapped into the zero vector of $W$.
> *Definition*: let $L: V \to W$ be a linear transformation. The **kernel** of $L$, denoted by $\ker(L)$, is defined by
>
> $$
> \ker(L) = \{\mathbf{v} \in V \;|\; L(\mathbf{v}) = \mathbf{0}\}.
> $$
The kernel is therefore the set consisting of all vectors in $V$ that get mapped into the zero vector of $W$.
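For example, the operator $L: \mathbb{R}^2 \to \mathbb{R}^2$ defined by $L(\mathbf{x}) = (x_1, 0)^T$ satisfies $L(\mathbf{x}) = \mathbf{0}$ precisely when $x_1 = 0$, so
$$
\ker(L) = \{\mathbf{x} \in \mathbb{R}^2 \;|\; x_1 = 0\} = \text{span}\big((0, 1)^T\big).
$$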
> *Definition*: let $L: V \to W$ be a linear transformation and let $S$ be a subspace of $V$. The **image** of $S$, denoted by $L(S)$, is defined by
>
> $$
> L(S) = \{\mathbf{w} \in W \;|\; \mathbf{w} = L(\mathbf{v}) \text{ for } \mathbf{v} \in S \}.
> $$
>
> The image of the entire vector space, $L(V)$, is called the **range** of $L$.
With these definitions the following theorem may be posed.
> *Theorem*: if $L: V \to W$ is a linear transformation and $S$ is a subspace of $V$, then
>
> 1. $\ker(L)$ is a subspace of $V$.
> 2. $L(S)$ is a subspace of $W$.
??? note "*Proof*:"
Let $L: V \to W$ be a linear transformation and let $S$ be a subspace of $V$.
To prove 1, let $\mathbf{v}_{1,2} \in \ker(L)$ and let $\lambda, \mu \in \mathbb{K}$. Then
$$
L(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2) = \lambda L(\mathbf{v}_1) + \mu L(\mathbf{v}_2) = \lambda \mathbf{0} + \mu \mathbf{0} = \mathbf{0},
$$
therefore $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2 \in \ker(L)$ and hence $\ker(L)$ is a subspace of $V$.
To prove 2, let $\mathbf{w}_{1,2} \in L(S)$, then there exist $\mathbf{v}_{1,2} \in S$ such that $\mathbf{w}_{1,2} = L(\mathbf{v}_{1,2})$. For any $\lambda, \mu \in \mathbb{K}$ we have
$$
\lambda \mathbf{w}_1 + \mu \mathbf{w}_2 = \lambda L(\mathbf{v}_1) + \mu L(\mathbf{v}_2) = L(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2),
$$
Since $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2 \in S$, it follows that $\lambda \mathbf{w}_1 + \mu \mathbf{w}_2 \in L(S)$ and hence $L(S)$ is a subspace of $W$.
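As an illustration of the theorem, consider the operator $L: \mathbb{R}^2 \to \mathbb{R}^2$ defined by $L(\mathbf{x}) = (x_1 + x_2, x_1 + x_2)^T$. Then
$$
\ker(L) = \text{span}\big((1, -1)^T\big) \quad \text{and} \quad L(\mathbb{R}^2) = \text{span}\big((1, 1)^T\big),
$$
and indeed both the kernel and the range are subspaces of $\mathbb{R}^2$, each of dimension one.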

docs/en/mathematics/linear-algebra/vector-spaces.md

@@ -411,7 +411,8 @@ So a single nonzero vector must span one-dimension exactly. For multiple vectors
\mathbf{v} = a_1 \mathbf{v}_1 + \dots + a_n \mathbf{v}_n,
$$
with
$$
a_i = - \frac{c_i}{c_{n+1}}
$$
@@ -444,17 +445,19 @@ With the definition of a row space the following theorem may be posed.
Let $A$ and $B$ be two matrices. If $B$ is row equivalent to $A$, then $B$ can be formed from $A$ by a finite sequence of row operations. Thus the row vectors of $B$ must be linear combinations of the row vectors of $A$. Consequently, the row space of $B$ must be a subspace of the row space of $A$. Since $A$ is row equivalent to $B$, by the same reasoning, the row space of $A$ is a subspace of the row space of $B$.
With the definition of a column space, a theorem posed in [systems of linear equations](systems-of-linear-equations.md) may be restated as follows.
> *Theorem*: a linear system $A \mathbf{x} = \mathbf{b}$ is consistent if and only if $\mathbf{b}$ is in the column space of $A$.
??? note "*Proof*:"
For the proof, see the initial proof in [systems of linear equations](systems-of-linear-equations.md).
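For example, the system
$$
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \mathbf{x} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
$$
is inconsistent, since the column space of the matrix consists of all vectors of the form $(a, 0)^T$ and therefore does not contain $(0, 1)^T$.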
With this restatement the following propositions may be posed.
> *Proposition*: let $A$ be an $m \times n$ matrix. The linear system $A \mathbf{x} = \mathbf{b}$ is consistent for every $\mathbf{b} \in \mathbb{R}^m$ if and only if the column vectors of $A$ span $\mathbb{R}^m$.
>
> The system $A \mathbf{x} = \mathbf{b}$ has at most one solution for every $\mathbf{b}$ if and only if the column vectors of $A$ are linearly independent.
??? note "*Proof*:"
@@ -466,17 +469,25 @@ The rank of a matrix may be determined by reducing the matrix to row echelon for
It follows that $\mathbf{x}_1 - \mathbf{x}_2 = \mathbf{0}$ and hence $\mathbf{x}_1 = \mathbf{x}_2$.
From these propositions the following corollary emerges.
> *Corollary*: an $n \times n$ matrix $A$ is nonsingular if and only if the column vectors of $A$ form a basis for $\mathbb{R}^n$.
??? note "*Proof*:"
Let $A$ be an $m \times n$ matrix. If the column vectors of $A$ span $\mathbb{R}^m$, then $n$ must be greater than or equal to $m$, since no set of fewer than $m$ vectors could span $\mathbb{R}^m$. If the columns of $A$ are linearly independent, then $n$ must be less than or equal to $m$, since every set of more than $m$ vectors in $\mathbb{R}^m$ is linearly dependent. Thus, if the column vectors of $A$ form a basis for $\mathbb{R}^m$, then $n = m$. By the preceding propositions, the column vectors of a square matrix $A$ form a basis for $\mathbb{R}^n$ if and only if $A \mathbf{x} = \mathbf{b}$ has exactly one solution for every $\mathbf{b} \in \mathbb{R}^n$, which is the case if and only if $A$ is nonsingular.
<br>
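For example, the matrix
$$
A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
$$
is nonsingular, and correspondingly its column vectors $(1, 0)^T$ and $(1, 1)^T$ are linearly independent and span $\mathbb{R}^2$, hence form a basis for $\mathbb{R}^2$.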
> *Definition*: the **rank** of a matrix $A$, denoted as $\text{rank}(A)$, is the dimension of the row space of $A$.
The rank of a matrix may be determined by reducing the matrix to row echelon form. The nonzero rows of the row echelon matrix will form a basis for the row space. The rank may be interpreted as a measure of the singularity of the matrix.
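For example, the matrix
$$
A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 0 & 1 \end{pmatrix}
$$
reduces to the row echelon form
$$
\begin{pmatrix} 1 & 2 & 3 \\ 0 & -2 & -2 \\ 0 & 0 & 0 \end{pmatrix},
$$
so $\text{rank}(A) = 2$ and the two nonzero rows form a basis for the row space of $A$.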
> *Definition*: the **nullity** of a matrix $A$, denoted as $\text{nullity}(A)$, is the dimension of the null space of $A$.
The nullity of $A$ is the number of columns without a pivot in the reduced echelon form.
> *Theorem*: if $A$ is an $m \times n$ matrix, then
>
> $$
> \text{rank}(A) + \text{nullity}(A) = n.
@@ -486,6 +497,8 @@ The nullity of $A$ is the number of columns without a pivot in the reduced echel
Let $U$ be the reduced echelon form of $A$. The system $A \mathbf{x} = \mathbf{0}$ is equivalent to the system $U \mathbf{x} = \mathbf{0}$. If $A$ has rank $r$, then $U$ will have $r$ nonzero rows and consequently the system $U \mathbf{x} = \mathbf{0}$ will involve $r$ pivots and $n - r$ free variables. The dimension of the null space will equal the number of free variables.
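For the matrix $A$ of the rank example above, $n = 3$ and $\text{rank}(A) = 2$. The system $A \mathbf{x} = \mathbf{0}$ has one free variable, with null space $\text{span}\big((-1, -1, 1)^T\big)$, so that $\text{nullity}(A) = 1$ and indeed
$$
\text{rank}(A) + \text{nullity}(A) = 2 + 1 = 3 = n.
$$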
The section on vector spaces may be finished off with this reasonably important theorem.
> *Theorem*: if $A$ is an $m \times n$ matrix, the dimension of the row space of $A$ equals the dimension of the column space of $A$.
??? note "*Proof*:"