# Linear transformations

## Definition

Definition: let $V$ and $W$ be vector spaces over a field $\mathbb{K}$. A mapping $L: V \to W$ is a linear transformation or linear map if

$$
L(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2) = \lambda L(\mathbf{v}_1) + \mu L(\mathbf{v}_2),
$$

for all $\mathbf{v}_{1,2} \in V$ and $\lambda, \mu \in \mathbb{K}$.

A linear transformation may also be called a vector space homomorphism. If the linear transformation is a bijection, then it may be called a linear isomorphism.

In the case that the vector spaces $V$ and $W$ are the same, $V = W$, a linear transformation $L: V \to V$ is referred to as a linear operator on $V$ or a linear endomorphism.
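
For example, the mapping $L: \mathbb{R}^2 \to \mathbb{R}^2$ defined by

$$
L\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 \\ 2 x_1 \end{pmatrix}
$$

is a linear operator on $\mathbb{R}^2$, since

$$
L(\lambda \mathbf{v} + \mu \mathbf{w}) = \begin{pmatrix} (\lambda v_1 + \mu w_1) + (\lambda v_2 + \mu w_2) \\ 2(\lambda v_1 + \mu w_1) \end{pmatrix} = \lambda L(\mathbf{v}) + \mu L(\mathbf{w}),
$$

for all $\mathbf{v}, \mathbf{w} \in \mathbb{R}^2$ and $\lambda, \mu \in \mathbb{R}$. In contrast, a translation $\mathbf{x} \mapsto \mathbf{x} + \mathbf{a}$ with $\mathbf{a} \neq \mathbf{0}$ is not linear, since it does not map the zero vector to the zero vector.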

## The image and kernel

Let $L: V \to W$ be a linear transformation from a vector space $V$ to a vector space $W$. In this section the effect that $L$ has on subspaces of $V$ is considered. Of particular importance is the set of vectors in $V$ that are mapped to the zero vector of $W$.

Definition: let $L: V \to W$ be a linear transformation. The kernel of $L$, denoted by $\ker(L)$, is defined by

$$
\ker(L) = \{\mathbf{v} \in V \;|\; L(\mathbf{v}) = \mathbf{0}\}.
$$

The kernel is therefore the set consisting of all vectors in $V$ that are mapped to the zero vector of $W$.

Definition: let $L: V \to W$ be a linear transformation and let $S$ be a subspace of $V$. The image of $S$, denoted by $L(S)$, is defined by

$$
L(S) = \{\mathbf{w} \in W \;|\; \mathbf{w} = L(\mathbf{v}) \text{ for some } \mathbf{v} \in S\}.
$$

The image of the entire vector space, $L(V)$, is called the range of $L$.
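
For example, let $L: \mathbb{R}^3 \to \mathbb{R}^2$ be the linear transformation defined by

$$
L\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 \\ x_2 + x_3 \end{pmatrix}.
$$

A vector lies in $\ker(L)$ precisely when $x_1 = -x_2$ and $x_3 = -x_2$, so

$$
\ker(L) = \operatorname{span}\left\{ \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix} \right\},
$$

while the range is all of $\mathbb{R}^2$, since $L\big((1, 0, 0)^T\big) = (1, 0)^T$ and $L\big((0, 0, 1)^T\big) = (0, 1)^T$ already span $\mathbb{R}^2$.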

With these definitions in place, the following theorem may be stated.

Theorem: if $L: V \to W$ is a linear transformation and $S$ is a subspace of $V$, then

  1. $\ker(L)$ is a subspace of $V$.
  2. $L(S)$ is a subspace of $W$.

??? note "Proof:"

Let $L: V \to W$ be a linear transformation and let $S$ be a subspace of $V$.

To prove 1, note first that $L(\mathbf{0}) = \mathbf{0}$, so $\ker(L)$ is nonempty. Let $\mathbf{v}_{1,2} \in \ker(L)$ and let $\lambda, \mu \in \mathbb{K}$. Then

$$
    L(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2) = \lambda L(\mathbf{v}_1) + \mu L(\mathbf{v}_2) = \lambda \mathbf{0} + \mu \mathbf{0} = \mathbf{0},
$$

therefore $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2 \in \ker(L)$ and hence $\ker(L)$ is a subspace of $V$. 

To prove 2, let $\mathbf{w}_{1,2} \in L(S)$; then there exist $\mathbf{v}_{1,2} \in S$ such that $\mathbf{w}_{1,2} = L(\mathbf{v}_{1,2})$. For any $\lambda, \mu \in \mathbb{K}$ we have

$$
    \lambda \mathbf{w}_1 + \mu \mathbf{w}_2 = \lambda L(\mathbf{v}_1) + \mu L(\mathbf{v}_2) = L(\lambda \mathbf{v}_1 + \mu \mathbf{v}_2),
$$

and since $\lambda \mathbf{v}_1 + \mu \mathbf{v}_2 \in S$, it follows that $\lambda \mathbf{w}_1 + \mu \mathbf{w}_2 \in L(S)$; hence $L(S)$ is a subspace of $W$.

## Matrix representations

Theorem: let $L: \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. Then there is an $m \times n$ matrix $A$ such that

$$
L(\mathbf{x}) = A \mathbf{x},
$$

for all $\mathbf{x} \in \mathbb{R}^n$, with the $i$th column vector of $A$ given by

$$
\mathbf{a}_i = L(\mathbf{e}_i),
$$

where $\{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ is the standard basis of $\mathbb{R}^n$ and $i \in \{1, \dots, n\}$.

??? note "Proof:"

For $i \in \{1, \dots, n\}$, define

$$
    \mathbf{a}_i = L(\mathbf{e}_i),
$$

and let

$$
    A = (\mathbf{a}_1, \dots, \mathbf{a}_n).
$$

If $\mathbf{x} = x_1 \mathbf{e}_1 + \dots + x_n \mathbf{e}_n$ is an arbitrary element of $\mathbb{R}^n$, then

$$
\begin{align*}
    L(\mathbf{x}) &= x_1 L(\mathbf{e}_1) + \dots + x_n L(\mathbf{e}_n), \\
                &= x_1 \mathbf{a}_1 + \dots + x_n \mathbf{a}_n, \\
                &= A \mathbf{x}.
\end{align*}
$$

It has therefore been established that each linear transformation from $\mathbb{R}^n$ to $\mathbb{R}^m$ can be represented in terms of an $m \times n$ matrix.
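
For example, consider the linear transformation $L: \mathbb{R}^2 \to \mathbb{R}^3$ defined by $L(\mathbf{x}) = (x_1 + x_2, x_1, x_2)^T$. Its images of the standard basis vectors are $L(\mathbf{e}_1) = (1, 1, 0)^T$ and $L(\mathbf{e}_2) = (1, 0, 1)^T$, so $L$ is represented by the $3 \times 2$ matrix

$$
A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad L(\mathbf{x}) = A \mathbf{x}.
$$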

Theorem: let $E = \{\mathbf{e}_1, \dots, \mathbf{e}_n\}$ and $F = \{\mathbf{f}_1, \dots, \mathbf{f}_n\}$ be two ordered bases for a vector space $V$ with $\dim V = n \in \mathbb{N}$, and let $L: V \to V$ be a linear operator on $V$. Let $S$ be the $n \times n$ transition matrix representing the change of basis from $F$ to $E$, that is, the $i$th column of $S$ is the coordinate vector of $\mathbf{f}_i$ with respect to $E$, so that

$$
\mathbf{f}_i = \sum_{j=1}^{n} s_{ji} \, \mathbf{e}_j,
$$

for $i \in \{1, \dots, n\}$.

If $A$ is the matrix representing $L$ with respect to $E$, and $B$ is the matrix representing $L$ with respect to $F$, then

$$
B = S^{-1} A S.
$$

??? note "Proof:"

Will be added later.

Definition: let $A$ and $B$ be $n \times n$ matrices. $B$ is said to be similar to $A$ if there exists a nonsingular matrix $S$ such that $B = S^{-1} A S$.

It follows from the above theorem that if $A$ and $B$ are $n \times n$ matrices representing the same linear operator $L$ with respect to two ordered bases, then $A$ and $B$ are similar.
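
For example, let $L: \mathbb{R}^2 \to \mathbb{R}^2$ be the operator that interchanges the two coordinates, represented with respect to the standard basis $E$ by $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. For the ordered basis $F = \{(1, 1)^T, (1, -1)^T\}$ the transition matrix from $F$ to $E$ is $S = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$, and the matrix representing $L$ with respect to $F$ is

$$
B = S^{-1} A S = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},
$$

so $A$ and $B$ are similar. This is consistent with the fact that $L$ leaves $(1, 1)^T$ unchanged and maps $(1, -1)^T$ to its negative.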