# Elementary matrices
> *Definition*: an *elementary* matrix is a matrix obtained from the identity matrix $I$ by performing exactly one elementary row operation.
>
> 1. An elementary matrix of type 1 $E_1$ is obtained by interchanging two rows of $I$.
> 2. An elementary matrix of type 2 $E_2$ is obtained by multiplying a row of $I$ by a nonzero constant.
> 3. An elementary matrix of type 3 $E_3$ is obtained from $I$ by adding a multiple of one row to another row.
For example, elementary matrices of types 1, 2 and 3 could be given by
$$
E_1 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1\end{pmatrix}, \qquad E_2 = \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 3\end{pmatrix}, \qquad E_3 = \begin{pmatrix}1 & 0 & 3\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}.
$$
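As an illustrative check (not part of the original text), left-multiplication by each of these matrices performs the corresponding row operation; the matrix $A$ below is chosen arbitrarily:

```python
import numpy as np

# The three example elementary matrices from above.
E1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # type 1: interchange rows 1 and 2
E2 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 3]])  # type 2: multiply row 3 by 3
E3 = np.array([[1, 0, 3], [0, 1, 0], [0, 0, 1]])  # type 3: add 3 times row 3 to row 1

# An arbitrary matrix to act on.
A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

print(E1 @ A)  # rows 1 and 2 of A interchanged
print(E2 @ A)  # row 3 of A multiplied by 3
print(E3 @ A)  # 3 times row 3 added to row 1 of A
```

Note that multiplying on the *left* acts on rows; multiplying on the right would act on columns instead.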
> *Theorem*: if $E$ is an elementary matrix, then $E$ is nonsingular and $E^{-1}$ is an elementary matrix of the same type.
??? note "*Proof*:"
    If $E$ is the elementary matrix of type 1 formed from $I$ by interchanging the $i$th and $j$th rows, then $E$ can be transformed back into $I$ by interchanging these same rows again. Therefore, $EE = I$ and hence $E$ is its own inverse.

    If $E$ is the elementary matrix of type 2 formed by multiplying the $i$th row of $I$ by a nonzero scalar $\alpha$, then $E$ can be transformed into the identity matrix by multiplying either its $i$th row or its $i$th column by $1/\alpha$.

    If $E$ is the elementary matrix of type 3 formed from $I$ by adding $m$ times the $i$th row to the $j$th row, then $E$ can be transformed back into $I$ either by subtracting $m$ times the $i$th row from the $j$th row or by subtracting $m$ times the $j$th column from the $i$th column.
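The three inverses described in the proof can be verified numerically; below is a minimal NumPy sketch using the example matrices from above (with $\alpha = 3$ and $m = 3$):

```python
import numpy as np

E1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # type 1: interchange rows 1 and 2
E2 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 3]])  # type 2: multiply row 3 by 3
E3 = np.array([[1, 0, 3], [0, 1, 0], [0, 0, 1]])  # type 3: add 3 times row 3 to row 1

I = np.eye(3)

# Type 1 is its own inverse: interchanging the same rows twice restores I.
assert np.allclose(E1 @ E1, I)

# Type 2 inverse: multiply the same row by 1/alpha.
E2_inv = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1/3]])
assert np.allclose(E2 @ E2_inv, I)

# Type 3 inverse: subtract the same multiple of the same row.
E3_inv = np.array([[1, 0, -3], [0, 1, 0], [0, 0, 1]])
assert np.allclose(E3 @ E3_inv, I)
```

In each case the inverse is again an elementary matrix of the same type, as the theorem states.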
> *Definition*: a matrix $B$ is **row equivalent** to a matrix $A$ if there exists a finite sequence $E_1, E_2, \dots, E_k$ of elementary matrices with $k \in \mathbb{N}$ such that
>
> $$
> B = E_k E_{k-1} \cdots E_1 A.
> $$
It may be observed that row equivalence is a reflexive, symmetric and transitive relation.
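Symmetry, for instance, follows because each $E_i$ is invertible: if $B = E_k \cdots E_1 A$, then $A = E_1^{-1} \cdots E_k^{-1} B$. A small numerical illustration (the matrices are chosen arbitrarily, not part of the original text):

```python
import numpy as np

E1 = np.array([[0., 1, 0], [1, 0, 0], [0, 0, 1]])  # type 1
E2 = np.array([[1., 0, 0], [0, 1, 0], [0, 0, 3]])  # type 2
A = np.array([[1., 2, 3], [4, 5, 6], [7, 8, 9]])

# B is row equivalent to A.
B = E2 @ E1 @ A

# Symmetry: undoing the row operations (in reverse order) recovers A,
# so A is row equivalent to B as well.
A_back = np.linalg.inv(E1) @ np.linalg.inv(E2) @ B
assert np.allclose(A_back, A)
```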
> *Theorem*: let $A$ be an $n \times n$ matrix, then the following statements are equivalent:
>
> 1. $A$ is nonsingular,
> 2. $A\mathbf{x} = \mathbf{0}$ has only the trivial solution $\mathbf{0}$,
> 3. $A$ is row equivalent to $I$.
??? note "*Proof*:"
    Suppose $A$ is nonsingular and let $\mathbf{\hat x}$ be a solution of $A \mathbf{x} = \mathbf{0}$, then

    $$
    \mathbf{\hat x} = I \mathbf{\hat x} = (A^{-1} A)\mathbf{\hat x} = A^{-1} (A \mathbf{\hat x}) = A^{-1} \mathbf{0} = \mathbf{0},
    $$

    so statement 1 implies statement 2.

    Suppose $A \mathbf{x} = \mathbf{0}$ has only the trivial solution and let $U$ be the row echelon form of $A$. If one of the diagonal elements of $U$ were $0$, the last row of $U$ would consist entirely of zeros. But then $A \mathbf{x} = \mathbf{0}$ would have a nontrivial solution. Thus $U$ must be a strictly triangular matrix with diagonal elements all equal to $1$. It then follows that $I$ is the reduced row echelon form of $A$ and hence $A$ is row equivalent to $I$, so statement 2 implies statement 3.

    Suppose $A$ is row equivalent to $I$, then there exist elementary matrices $E_1, E_2, \dots, E_k$ with $k \in \mathbb{N}$ such that

    $$
    A = E_k E_{k-1} \cdots E_1 I = E_k E_{k-1} \cdots E_1.
    $$

    Since $E_i$ is invertible for each $i \in \{1, \dots, k\}$, the product $E_k E_{k-1} \cdots E_1$ is also invertible, hence $A$ is nonsingular and statement 3 implies statement 1.

If $A$ is nonsingular then $A$ is row equivalent to $I$ and hence there exist elementary matrices $E_1, \dots, E_k$ such that

$$
E_k E_{k-1} \cdots E_1 A = I.
$$

Multiplying both sides on the right by $A^{-1}$ gives

$$
E_k E_{k-1} \cdots E_1 = A^{-1},
$$

which yields a method for computing $A^{-1}$: the same sequence of row operations that reduces $A$ to $I$ transforms $I$ into $A^{-1}$.
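This method can be carried out by row reducing the augmented matrix $(A \mid I)$ to $(I \mid A^{-1})$. A minimal sketch (the function name and the pivoting strategy are illustrative choices, not from the original text):

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Row reduce the augmented matrix (A | I) to (I | A^-1)."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        # Type 1 operation: interchange rows to bring up a nonzero pivot.
        p = i + np.argmax(np.abs(M[i:, i]))
        if np.isclose(M[p, i], 0):
            raise ValueError("matrix is singular")
        M[[i, p]] = M[[p, i]]
        # Type 2 operation: scale the pivot row so the pivot becomes 1.
        M[i] /= M[i, i]
        # Type 3 operations: eliminate the other entries in the pivot column.
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]
    # The left half is now I, so the right half is A^-1.
    return M[:, n:]

A = np.array([[2.0, 1.0], [1.0, 1.0]])
print(inverse_by_row_reduction(A))  # [[ 1. -1.] [-1.  2.]]
```

Each iteration applies only elementary row operations, so the whole reduction corresponds to left-multiplying by a product $E_k E_{k-1} \cdots E_1$ as in the derivation above.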