# Determinants

## Definition
With each $n \times n$ matrix $A$ with $n \in \mathbb{N}$ it is possible to associate a scalar, the determinant of $A$, denoted by $\det(A)$ or $|A|$.
Definition: let $A = (a_{ij})$ be an $n \times n$ matrix and let $M_{ij}$ denote the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting the row and column containing $a_{ij}$, with $n \in \mathbb{N}$ and $(i,j) \in \{1, \dots, n\} \times \{1, \dots, n\}$. The determinant of $M_{ij}$ is called the minor of $a_{ij}$. We define the cofactor $A_{ij}$ of $a_{ij}$ by

$$
A_{ij} = (-1)^{i+j} \det(M_{ij}).
$$
The cofactor is needed to formulate the definition of the determinant, as may be observed below.
Definition: the determinant of an $n \times n$ matrix $A$ with $n \in \mathbb{N}$, denoted by $\det(A)$ or $|A|$, is a scalar associated with the matrix $A$ that is defined inductively as

$$
\det(A) = \begin{cases} a_{11} &\text{if } n = 1, \\ a_{11} A_{11} + a_{12} A_{12} + \dots + a_{1n} A_{1n} &\text{if } n > 1, \end{cases}
$$

where

$$
A_{1j} = (-1)^{1+j} \det(M_{1j})
$$

with $j \in \{1, \dots, n\}$ are the cofactors associated with the entries in the first row of $A$.
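The inductive definition translates directly into a recursive function. The sketch below (the names `det` and `minor` are my own, not from the text) expands along the first row exactly as in the definition:

```python
def minor(A, i, j):
    """M_ij: the submatrix of A with row i and column j deleted (0-based)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via cofactor expansion along the first row."""
    n = len(A)
    if n == 1:                  # base case: det(A) = a_11
        return A[0][0]
    # a_11 A_11 + a_12 A_12 + ... + a_1n A_1n, with A_1j = (-1)^(1+j) det(M_1j);
    # with 0-based j the sign (-1)^(1+j) becomes (-1)^j
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))
```

For instance, `det([[2, 3], [4, 5]])` evaluates to `-2`. Note that this recursion has factorial cost and is only meant to mirror the definition, not to be an efficient algorithm.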
Theorem: if $A$ is an $n \times n$ matrix with $n \in \mathbb{N} \backslash \{1\}$, then $\det(A)$ can be expressed as a cofactor expansion using any row or column of $A$.
??? note "Proof:"
Will be added later.
We then have for an $n \times n$ matrix $A$ with $n \in \mathbb{N} \backslash \{1\}$

$$
\begin{align*}
\det(A) &= a_{i1} A_{i1} + a_{i2} A_{i2} + \dots + a_{in} A_{in}, \\
&= a_{1j} A_{1j} + a_{2j} A_{2j} + \dots + a_{nj} A_{nj},
\end{align*}
$$

with $i, j \in \{1, \dots, n\}$.
For example, the determinant of a $4 \times 4$ matrix $A$ given by

$$
A = \begin{pmatrix} 0 & 2 & 3 & 0 \\ 0 & 4 & 5 & 0 \\ 0 & 1 & 0 & 3 \\ 2 & 0 & 1 & 3 \end{pmatrix}
$$

may be determined using the definition and the theorem above; expanding along the first column and then along the third column gives

$$
\det(A) = 2 \cdot (-1)^5 \det\begin{pmatrix} 2 & 3 & 0 \\ 4 & 5 & 0 \\ 1 & 0 & 3 \end{pmatrix} = -2 \cdot 3 \cdot (-1)^6 \det\begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix} = 12.
$$
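The theorem can also be checked numerically on this matrix. In the sketch below (helper names are my own) the expansion is carried out along each of the four rows in turn, and all four expansions agree:

```python
def minor(A, i, j):
    """Delete row i and column j of A (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det_along_row(A, i):
    """Cofactor expansion along row i; the recursion then uses the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** (i + j) * A[i][j] * det_along_row(minor(A, i, j), 0)
               for j in range(n))

A = [[0, 2, 3, 0],
     [0, 4, 5, 0],
     [0, 1, 0, 3],
     [2, 0, 1, 3]]
print([det_along_row(A, i) for i in range(4)])  # the same value for every row
```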
## Properties of determinants
Theorem: if $A$ is an $n \times n$ matrix, then $\det(A^T) = \det(A)$.
??? note "Proof:"
It may be observed that the result holds for $n=1$. Assume that the result holds for all $k \times k$ matrices and that $A$ is a $(k+1) \times (k+1)$ matrix for some $k \in \mathbb{N}$. Expanding $\det (A)$ along the first row of $A$ yields
$$
\det(A) = a_{11} \det(M_{11}) - a_{12} \det(M_{12}) + \dots + (-1)^{k+2} a_{1(k+1)} \det(M_{1(k+1)}),
$$
Since the minors are all $k \times k$ matrices, it follows from the induction hypothesis that
$$
\det(A) = a_{11} \det(M_{11}^T) - a_{12} \det(M_{12}^T) + \dots + (-1)^{k+2} a_{1(k+1)} \det(M_{1(k+1)}^T).
$$
The right hand side of the above equation is the expansion by minors of $\det(A^T)$ using the first column of $A^T$, therefore $\det(A^T) = \det(A)$.
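The transpose property is easy to spot-check; a minimal sketch with an integer matrix of my own choosing:

```python
def minor(A, i, j):
    """Delete row i and column j of A (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

A = [[1, 2, 3],
     [0, 4, 5],
     [1, 0, 6]]
AT = [list(col) for col in zip(*A)]   # transpose of A
print(det(A) == det(AT))              # True
```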
Theorem: if $A$ is an $n \times n$ triangular matrix with $n \in \mathbb{N}$, then the determinant of $A$ equals the product of the diagonal elements of $A$.
??? note "Proof:"
Let $A$ be an $n \times n$ triangular matrix with $n \in \mathbb{N}$ given by
$$
A = \begin{pmatrix} a_{11} & \cdots &a_{1n}\\ & \ddots & \vdots \\ & & a_{nn} \end{pmatrix}.
$$
We claim that $\det(A) = a_{11} \cdot a_{22} \cdots a_{nn}$. We first check the claim for $n=1$ which is given by $\det(A) = a_{11}$.
Now suppose that for some $k \in \mathbb{N}$ the determinant of a $k \times k$ triangular matrix $A_k$ is given by
$$
\det(A_k) = a_{11} \cdot a_{22} \cdots a_{kk},
$$
then expanding along the last row yields
$$
\det(A_{k+1}) = \det\begin{pmatrix} A_k & \mathbf{b} \\ 0 \cdots 0 & a_{(k+1)(k+1)} \end{pmatrix} = a_{(k+1)(k+1)} \det(A_k) + 0 = a_{11} \cdot a_{22} \cdots a_{kk} \cdot a_{(k+1)(k+1)},
$$
where $\mathbf{b}$ collects the entries $a_{1(k+1)}, \dots, a_{k(k+1)}$.
Hence if the claim holds for some $k \in \mathbb{N}$ then it also holds for $k+1$. The principle of natural induction now implies that for all $n \in \mathbb{N}$ we have
$$
\det(A) = a_{11} \cdot a_{22} \cdots a_{nn}.
$$
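A quick numerical check of the theorem, using a fixed upper triangular matrix of my own choosing:

```python
import math

def minor(A, i, j):
    """Delete row i and column j of A (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

A = [[2, 1, 3],
     [0, -1, 4],
     [0, 0, 5]]
diagonal_product = math.prod(A[i][i] for i in range(len(A)))
print(det(A), diagonal_product)  # both equal -10
```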
Theorem: let $A$ be an $n \times n$ matrix.

- if $A$ has a row or column consisting entirely of zeros, then $\det(A) = 0$.
- if $A$ has two identical rows or two identical columns, then $\det(A) = 0$.
??? note "Proof:"
Will be added later.
Lemma: let $A$ be an $n \times n$ matrix with $n \in \mathbb{N}$. If $A_{jk}$ denotes the cofactor of $a_{jk}$ for $j, k \in \{1, \dots, n\}$, then

$$
a_{i1} A_{j1} + a_{i2} A_{j2} + \dots + a_{in} A_{jn} = \begin{cases} \det(A) &\text{if } i = j, \\ 0 &\text{if } i \neq j. \end{cases}
$$
??? note "Proof:"
If $i = j$ then we obtain the cofactor expansion of $\det(A)$ along the $i$th row of $A$.
If $i \neq j$, let $A^*$ be the matrix obtained by replacing the $j$th row of $A$ by the $i$th row of $A$:
$$
A^* = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{in} \\ \vdots & & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{in} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{array}{l} \\ \\ i\text{th row} \\ \\ j\text{th row} \\ \\ \\ \end{array}
$$
Since two rows of $A^*$ are the same, its determinant must be zero. It follows from the cofactor expansion of $\det(A^*)$ along the $j$th row that
$$
\begin{align*}
0 &= \det(A^*) = a_{i1} A_{j1}^* + a_{i2} A_{j2}^* + \dots + a_{in} A_{jn}^*, \\
&= a_{i1} A_{j1} + a_{i2} A_{j2} + \dots + a_{in} A_{jn}.
\end{align*}
$$
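The lemma can be verified numerically. In this sketch (helper names are mine) the entries of row $i$ are paired with the cofactors of row $j$ for every pair $(i, j)$:

```python
def minor(A, i, j):
    """Delete row i and column j of A (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

def cofactor(A, j, k):
    """A_jk = (-1)^(j+k) det(M_jk), with 0-based indices."""
    return (-1) ** (j + k) * det(minor(A, j, k))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
n = len(A)
for i in range(n):
    for j in range(n):
        s = sum(A[i][k] * cofactor(A, j, k) for k in range(n))
        # s equals det(A) when i == j and 0 otherwise
        assert s == (det(A) if i == j else 0)
```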
Theorem: let $E$ be an $n \times n$ elementary matrix and $A$ an $n \times n$ matrix with $n \in \mathbb{N}$, then we have

$$
\det(EA) = \det(E) \det(A),
$$

where

$$
\det(E) = \begin{cases} -1 &\text{if } E \text{ is of type I}, \\ \alpha \in \mathbb{R} \backslash \{0\} &\text{if } E \text{ is of type II}, \\ 1 &\text{if } E \text{ is of type III}. \end{cases}
$$
??? note "Proof:"
Will be added later.
Similar results hold for column operations, since for the elementary matrix $E$, $E^T$ is also an elementary matrix and

$$
\det(AE) = \det((AE)^T) = \det(E^T A^T) = \det(E^T) \det(A^T) = \det(E) \det(A).
$$
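The three types and their determinants can be spot-checked. The matrices below are $3 \times 3$ examples of my own construction (type I swaps two rows, type II scales a row by $\alpha = 3$, type III adds a multiple of one row to another):

```python
def minor(A, i, j):
    """Delete row i and column j of A (0-based indices)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant via cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

def matmul(X, Y):
    """Product of two square matrices of the same size."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # type I: interchange rows 1 and 2
E2 = [[1, 0, 0], [0, 3, 0], [0, 0, 1]]   # type II: scale row 2 by alpha = 3
E3 = [[1, 0, 0], [0, 1, 0], [2, 0, 1]]   # type III: add 2 * row 1 to row 3
A = [[1, 2, 0], [3, 1, 4], [0, 5, 6]]

print(det(E1), det(E2), det(E3))          # -1 3 1
for E in (E1, E2, E3):
    assert det(matmul(E, A)) == det(E) * det(A)
```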
Theorem: an $n \times n$ matrix $A$ with $n \in \mathbb{N}$ is singular if and only if

$$
\det(A) = 0.
$$
??? note "Proof:"
Let $A$ be an $n \times n$ matrix with $n \in \mathbb{N}$. Matrix $A$ can be reduced to row echelon form with a finite number of row operations, obtaining
$$
U = E_k E_{k-1} \cdots E_1 A,
$$
where $U$ is an $n \times n$ matrix in row echelon form and $E_i$ are $n \times n$ elementary matrices for $i \in \{1, \dots, k\}$. It follows then that
$$
\begin{align*}
\det(U) &= \det(E_k E_{k-1} \cdots E_1 A), \\
&= \det(E_k) \det(E_{k-1}) \cdots \det(E_1) \det(A).
\end{align*}
$$
Since the determinants of the elementary matrices are all nonzero, it follows that $\det(A) = 0$ if and only if $\det(U) = 0$. If $A$ is singular then $U$ has a row consisting entirely of zeros and hence $\det(U) = 0$. If $A$ is nonsingular then $U$ is triangular with 1's along the diagonal and hence $\det(U) = 1$.
From this theorem we obtain a method for computing $\det(A)$ for a nonsingular matrix $A$: since $\det(U) = 1$,

$$
\det(A) = \Big(\det(E_k) \det(E_{k-1}) \cdots \det(E_1)\Big)^{-1}.
$$
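In practice one need not accumulate the $\det(E_i)$ explicitly; it suffices to track the effect of each row operation during the reduction. A sketch of this idea (the function name is mine), using exact rational arithmetic:

```python
from fractions import Fraction

def det_by_elimination(A):
    """Reduce A to upper triangular form using type I (row interchange) and
    type III (add a multiple of a row) operations; an interchange flips the
    sign of the determinant and a type III operation leaves it unchanged,
    so det(A) is the signed product of the pivots."""
    A = [[Fraction(x) for x in row] for row in A]
    n, sign = len(A), 1
    for c in range(n):
        # find a nonzero pivot in column c at or below the diagonal
        p = next((r for r in range(c, n) if A[r][c] != 0), None)
        if p is None:
            return Fraction(0)           # no pivot: A is singular
        if p != c:
            A[c], A[p] = A[p], A[c]      # type I operation
            sign = -sign
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            A[r] = [a - m * b for a, b in zip(A[r], A[c])]  # type III operation
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]
    return result

print(det_by_elimination([[0, 2, 3, 0],
                          [0, 4, 5, 0],
                          [0, 1, 0, 3],
                          [2, 0, 1, 3]]))  # 12, matching the earlier example
```

Unlike the recursive cofactor expansion, this runs in $O(n^3)$ time.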
Theorem: let $A$ and $B$ be $n \times n$ matrices with $n \in \mathbb{N}$, then

$$
\det(AB) = \det(A) \det(B).
$$
??? note "Proof:"
Will be added later.
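A small sanity check of the product rule, with $2 \times 2$ matrices of my own choosing:

```python
def det2(M):
    """Determinant of a 2x2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]     # det(A) = -2
B = [[0, 1], [5, 2]]     # det(B) = -5
print(det2(matmul(A, B)), det2(A) * det2(B))  # 10 10
```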