# Determinants

## Definition

With each $n \times n$ matrix $A$ with $n \in \mathbb{N}$ it is possible to associate a scalar, the determinant of $A$, denoted by $\det (A)$ or $|A|$.

> *Definition*: let $A = (a_{ij})$ be an $n \times n$ matrix and let $M_{ij}$ denote the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting the row and column containing $a_{ij}$, with $n \in \mathbb{N}$ and $(i,j) \in \{1, \dots, n\} \times \{1, \dots, n\}$. The determinant of $M_{ij}$ is called the **minor** of $a_{ij}$. We define the **cofactor** of $a_{ij}$ by
>
> $$
> A_{ij} = (-1)^{i+j} \det(M_{ij}).
> $$

This definition is needed to formulate the definition of the determinant, as may be observed below.

> *Definition*: the **determinant** of an $n \times n$ matrix $A$ with $n \in \mathbb{N}$, denoted by $\det (A)$ or $|A|$, is a scalar associated with the matrix $A$ that is defined inductively as
>
> $$
> \det (A) = \begin{cases} a_{11} &\text{ if } n = 1, \\ a_{11} A_{11} + a_{12} A_{12} + \dots + a_{1n} A_{1n} &\text{ if } n > 1, \end{cases}
> $$
>
> where
>
> $$
> A_{1j} = (-1)^{1+j} \det (M_{1j})
> $$
>
> with $j \in \{1, \dots, n\}$ are the cofactors associated with the entries in the first row of $A$.
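The inductive definition translates directly into a recursive procedure. The following is a minimal sketch; the helper names `minor` and `det` are illustrative, not from the text.

```python
def minor(A, i, j):
    """The matrix M_ij: delete row i and column j (0-indexed) from A."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    # sum over j of a_{1j} * (-1)^{1+j} * det(M_{1j});
    # with 0-indexed columns the sign (-1)^{1+j} becomes (-1)^j.
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[2, 3], [4, 5]]))  # 2*5 - 3*4 = -2
```

Note that this mirrors the definition rather than being a practical algorithm: the recursion evaluates $n!$ products, whereas row reduction (used later in this section) is far cheaper.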
> *Theorem*: if $A$ is an $n \times n$ matrix with $n \in \mathbb{N} \backslash \{1\}$ then $\det(A)$ can be expressed as a cofactor expansion using any row or column of $A$.

??? note "*Proof*:"

    Will be added later.

We then have for an $n \times n$ matrix $A$ with $n \in \mathbb{N} \backslash \{1\}$

$$
\begin{align*}
\det(A) &= a_{i1} A_{i1} + a_{i2} A_{i2} + \dots + a_{in} A_{in}, \\
&= a_{1j} A_{1j} + a_{2j} A_{2j} + \dots + a_{nj} A_{nj},
\end{align*}
$$

with $i,j \in \{1, \dots, n\}$.

For example, the determinant of a $4 \times 4$ matrix $A$ given by

$$
A = \begin{pmatrix} 0 & 2 & 3 & 0\\ 0 & 4 & 5 & 0\\ 0 & 1 & 0 & 3\\ 2 & 0 & 1 & 3 \end{pmatrix}
$$

may be determined using the definition and the theorem above; expanding along the first column and then along the third column yields

$$
\det(A) = 2 \cdot (-1)^5 \det\begin{pmatrix} 2 & 3 & 0\\ 4 & 5 & 0\\ 1 & 0 & 3 \end{pmatrix} = -2 \cdot 3 \cdot (-1)^6 \det\begin{pmatrix} 2 & 3 \\ 4 & 5 \end{pmatrix} = 12.
$$

## Properties of determinants

> *Theorem*: if $A$ is an $n \times n$ matrix then $\det (A^T) = \det (A)$.

??? note "*Proof*:"

    It may be observed that the result holds for $n=1$. Assume that the result holds for all $k \times k$ matrices and that $A$ is a $(k+1) \times (k+1)$ matrix for some $k \in \mathbb{N}$. Expanding $\det (A)$ along the first row of $A$ obtains

    $$
    \det(A) = a_{11} \det(M_{11}) - a_{12} \det(M_{12}) + \dots + (-1)^{k+2} a_{1(k+1)} \det(M_{1(k+1)}),
    $$

    since the minors are all $k \times k$ matrices it follows from the principle of natural induction that

    $$
    \det(A) = a_{11} \det(M_{11}^T) - a_{12} \det(M_{12}^T) + \dots + (-1)^{k+2} a_{1(k+1)} \det(M_{1(k+1)}^T).
    $$

    The right hand side of the above equation is the expansion by minors of $\det(A^T)$ using the first column of $A^T$, therefore $\det(A^T) = \det(A)$.

> *Theorem*: if $A$ is an $n \times n$ triangular matrix with $n \in \mathbb{N}$, then the determinant of $A$ equals the product of the diagonal elements of $A$.

??? note "*Proof*:"

    Let $A$ be an $n \times n$ upper triangular matrix with $n \in \mathbb{N}$ given by

    $$
    A = \begin{pmatrix} a_{11} & \cdots & a_{1n}\\ & \ddots & \vdots \\ & & a_{nn} \end{pmatrix}.
    $$

    We claim that $\det(A) = a_{11} \cdot a_{22} \cdots a_{nn}$. We first check the claim for $n=1$, for which $\det(A) = a_{11}$. Now suppose that for some $k \in \mathbb{N}$ the determinant of a $k \times k$ upper triangular matrix $A_k$ is given by

    $$
    \det(A_k) = a_{11} \cdot a_{22} \cdots a_{kk},
    $$

    then expanding along the last row, whose only nonzero entry is $a_{(k+1)(k+1)}$, gives

    $$
    \det(A_{k+1}) = \det\begin{pmatrix} A_k & \mathbf{a} \\ \mathbf{0}^T & a_{(k+1)(k+1)} \end{pmatrix} = a_{(k+1)(k+1)} \det(A_k) = a_{11} \cdot a_{22} \cdots a_{kk} \cdot a_{(k+1)(k+1)},
    $$

    with $\mathbf{a} = (a_{1(k+1)}, \dots, a_{k(k+1)})^T$. Hence if the claim holds for some $k \in \mathbb{N}$ then it also holds for $k+1$. The principle of natural induction now implies that for all $n \in \mathbb{N}$ we have

    $$
    \det(A) = a_{11} \cdot a_{22} \cdots a_{nn}.
    $$

    A similar argument applies to lower triangular matrices, since $\det(A^T) = \det(A)$.

> *Theorem*: let $A$ be an $n \times n$ matrix,
>
> 1. if $A$ has a row or column consisting entirely of zeros, then $\det(A) = 0$.
> 2. if $A$ has two identical rows or two identical columns, then $\det(A) = 0$.

??? note "*Proof*:"

    Will be added later.

> *Lemma*: let $A$ be an $n \times n$ matrix with $n \in \mathbb{N}$. If $A_{jk}$ denotes the cofactor of $a_{jk}$ for $k \in \{1, \dots, n\}$ then
>
> $$
> a_{i1} A_{j1} + a_{i2} A_{j2} + \dots + a_{in} A_{jn} = \begin{cases} \det(A) &\text{ if } i = j,\\ 0 &\text{ if } i \neq j.\end{cases}
> $$

??? note "*Proof*:"

    If $i = j$ then we obtain the cofactor expansion of $\det(A)$ along the $i$th row of $A$.
    If $i \neq j$, let $A^*$ be the matrix obtained by replacing the $j$th row of $A$ by the $i$th row of $A$

    $$
    A^* = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ \vdots & & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{in} \\ \vdots & & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{in} \\ \vdots & & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{array}{l} \\ \\ i\text{th row} \\ \\ j\text{th row} \\ \\ \\ \end{array}
    $$

    since two rows of $A^*$ are the same its determinant must be zero. It follows from the cofactor expansion of $\det(A^*)$ along the $j$th row that

    $$
    \begin{align*}
    0 = \det(A^*) &= a_{i1} A_{j1}^* + a_{i2} A_{j2}^* + \dots + a_{in} A_{jn}^*, \\
    &= a_{i1} A_{j1} + a_{i2} A_{j2} + \dots + a_{in} A_{jn}.
    \end{align*}
    $$

> *Theorem*: let $E$ be an $n \times n$ elementary matrix and $A$ an $n \times n$ matrix with $n \in \mathbb{N}$, then we have
>
> $$
> \det(E A) = \det(E) \det(A),
> $$
>
> where
>
> $$
> \det(E) = \begin{cases} -1 &\text{ if $E$ is of type I},\\ \alpha \in \mathbb{R}\backslash \{0\} &\text{ if $E$ is of type II},\\ 1 &\text{ if $E$ is of type III}. \end{cases}
> $$

??? note "*Proof*:"

    Will be added later.

Similar results hold for column operations, since for the elementary matrix $E$, $E^T$ is also an elementary matrix and $\det(A E) = \det((AE)^T) = \det(E^T A^T) = \det(E^T) \det(A^T) = \det(E) \det(A)$.

> *Theorem*: an $n \times n$ matrix $A$ with $n \in \mathbb{N}$ is singular if and only if
>
> $$
> \det(A) = 0.
> $$

??? note "*Proof*:"

    Let $A$ be an $n \times n$ matrix with $n \in \mathbb{N}$. Matrix $A$ can be reduced to row echelon form with a finite number of row operations, obtaining

    $$
    U = E_k E_{k-1} \cdots E_1 A,
    $$

    where $U$ is an $n \times n$ matrix in row echelon form and $E_i$ are $n \times n$ elementary matrices for $i \in \{1, \dots, k\}$. It follows then that

    $$
    \begin{align*}
    \det(U) &= \det(E_k E_{k-1} \cdots E_1 A), \\
    &= \det(E_k) \det(E_{k-1}) \cdots \det(E_1) \det(A).
    \end{align*}
    $$

    Since the determinants of the elementary matrices are all nonzero, it follows that $\det(A) = 0$ if and only if $\det(U) = 0$. If $A$ is singular then $U$ has a row consisting entirely of zeros and hence $\det(U) = 0$. If $A$ is nonsingular then $U$ is triangular with 1's along the diagonal and hence $\det(U) = 1$.

From this theorem we obtain a method for computing $\det(A)$: reduce a nonsingular $A$ to such a $U$ with 1's along the diagonal and take

$$
\det(A) = \Big(\det(E_k) \det(E_{k-1}) \cdots \det(E_1)\Big)^{-1}.
$$

> *Theorem*: let $A$ and $B$ be $n \times n$ matrices with $n \in \mathbb{N}$, then
>
> $$
> \det(AB) = \det(A) \det(B).
> $$

??? note "*Proof*:"

    If the $n \times n$ matrix $B$ is singular with $n \in \mathbb{N}$ then it follows that $AB$ is also singular and therefore

    $$
    \det(AB) = 0 = \det(A) \det(B).
    $$

    If $B$ is nonsingular, $B$ can be written as a product of elementary matrices $B = E_k \cdots E_1$. Therefore

    $$
    \begin{align*}
    \det(AB) &= \det(A E_k \cdots E_1), \\
    &= \det(A)\det(E_k)\cdots\det(E_1), \\
    &= \det(A)\det(E_k \cdots E_1), \\
    &= \det(A)\det(B).
    \end{align*}
    $$

> *Theorem*: let $A$ be a nonsingular $n \times n$ matrix with $n \in \mathbb{N}$, then we have
>
> $$
> \det(A^{-1}) = \frac{1}{\det(A)}.
> $$

??? note "*Proof*:"

    Suppose $A$ is a nonsingular $n \times n$ matrix, then

    $$
    A^{-1} A = I,
    $$

    and taking the determinant on both sides

    $$
    \det(A^{-1}A) = \det(A^{-1})\det(A) = \det(I) = 1,
    $$

    therefore

    $$
    \det(A^{-1}) = \frac{1}{\det(A)}.
    $$

## The adjoint of a matrix

> *Definition*: let $A$ be an $n \times n$ matrix with $n \in \mathbb{N}$, the adjoint of $A$ is given by
>
> $$
> \mathrm{adj}(A) = \begin{pmatrix} A_{11} & A_{21} & \dots & A_{n1} \\ A_{12} & A_{22} & \dots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \dots & A_{nn} \end{pmatrix}
> $$
>
> with $A_{ij}$ for $(i,j) \in \{1, \dots, n\} \times \{1, \dots, n\}$ the cofactors of $A$.

The use of the adjoint becomes apparent in the following theorem, which generally saves a lot of time and effort.
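As a sketch of the definition, the adjoint is the transpose of the matrix of cofactors. The helper names below are illustrative, and the recursive `det` from the definition is repeated so the snippet is self-contained.

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def adj(A):
    """adj(A): entry (i, j) is the cofactor A_{ji}, i.e. the transposed cofactor matrix."""
    n = len(A)
    # cof[i][j] = (-1)^{i+j} det(M_ij), the cofactor of a_{ij} (0-indexed)
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j + 1:] for k, r in enumerate(A) if k != i])
            for j in range(n)] for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
print(adj(A))  # [[4, -2], [-3, 1]]
```

One can check by hand that multiplying this adjoint with $A$ gives $\det(A) I = -2 I$, in line with the lemma on cofactor expansions.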
> *Theorem*: let $A$ be a nonsingular $n \times n$ matrix with $n \in \mathbb{N}$, then we have
>
> $$
> A^{-1} = \frac{1}{\det(A)} \mathrm{adj}(A).
> $$

??? note "*Proof*:"

    Suppose $A$ is a nonsingular $n \times n$ matrix with $n \in \mathbb{N}$, from the definition and the lemma above it follows that

    $$
    \mathrm{adj}(A) A = \det(A) I,
    $$

    this may be rewritten into

    $$
    A^{-1} = \frac{1}{\det(A)} \mathrm{adj}(A).
    $$

## Cramer's rule

> *Theorem*: let $A$ be an $n \times n$ nonsingular matrix with $n \in \mathbb{N}$ and let $\mathbf{b} \in \mathbb{R}^n$. Let $A_i$ be the matrix obtained by replacing the $i$th column of $A$ by $\mathbf{b}$. If $\mathbf{x}$ is the unique solution of $A\mathbf{x} = \mathbf{b}$ then
>
> $$
> x_i = \frac{\det(A_i)}{\det(A)}
> $$
>
> for $i \in \{1, \dots, n\}$.

??? note "*Proof*:"

    Let $A$ be an $n \times n$ nonsingular matrix with $n \in \mathbb{N}$ and let $\mathbf{b} \in \mathbb{R}^n$. If $\mathbf{x}$ is the unique solution of $A\mathbf{x} = \mathbf{b}$ then we have

    $$
    \mathbf{x} = A^{-1} \mathbf{b} = \frac{1}{\det(A)} \mathrm{adj}(A) \mathbf{b},
    $$

    it follows that

    $$
    \begin{align*}
    x_i &= \frac{b_1 A_{1i} + \dots + b_n A_{ni}}{\det(A)} \\
    &= \frac{\det(A_i)}{\det(A)}
    \end{align*}
    $$

    for $i \in \{1, \dots, n\}$.
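Cramer's rule can be checked numerically. The sketch below uses illustrative names, `Fraction` for exact arithmetic, and the recursive `det` from the definition; it replaces the $i$th column of $A$ by $\mathbf{b}$ and divides determinants exactly as the theorem states.

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    """Solve Ax = b for a nonsingular A via Cramer's rule."""
    n, d = len(A), det(A)
    x = []
    for i in range(n):
        # A_i: column i of A replaced by b
        A_i = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        x.append(Fraction(det(A_i), d))
    return x

print(cramer([[2, 1], [1, 3]], [3, 5]))  # [Fraction(4, 5), Fraction(7, 5)]
```

For the system $2x_1 + x_2 = 3$, $x_1 + 3x_2 = 5$ this gives $x_1 = \tfrac{4}{5}$, $x_2 = \tfrac{7}{5}$, which indeed satisfies both equations. Like the recursive determinant, this is a didactic mirror of the theorem, not a practical solver.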