> *Definition*: let $A = (a_{ij})$ be an $n \times n$ matrix and let $M_{ij}$ denote the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting the row and column containing $a_{ij}$, with $n \in \mathbb{N}$ and $(i,j) \in \{1, \dots, n\} \times \{1, \dots, n\}$. The determinant of $M_{ij}$ is called the **minor** of $a_{ij}$. We define the **cofactor** of $a_{ij}$ by
>
> $$
> A_{ij} = (-1)^{i+j} \det (M_{ij}).
> $$
This definition is necessary to formulate a definition for the determinant, as may be observed below.
> *Definition*: the **determinant** of an $n \times n$ matrix $A$ with $n \in \mathbb{N}$, denoted by $\det (A)$ or $|A|$, is a scalar associated with the matrix $A$ that is defined inductively as
>
> $$
> \det (A) = \begin{cases}a_{11} &\text{ if } n = 1 \\ a_{11} A_{11} + a_{12} A_{12} + \dots + a_{1n} A_{1n} &\text{ if } n > 1\end{cases}
> $$
>
> where
>
> $$
> A_{1j} = (-1)^{1+j} \det (M_{1j})
> $$
>
> with $j \in \{1, \dots, n\}$ are the cofactors associated with the entries in the first row of $A$.
<br>
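The inductive definition above can be turned directly into a short recursive routine. The following sketch (the function name `det` and the list-of-rows representation are my own choices, not part of the text) computes the determinant by cofactor expansion along the first row:

```python
def det(A):
    # A is a square matrix represented as a list of rows.
    n = len(A)
    if n == 1:
        # base case of the inductive definition: det(A) = a_11
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete the first row and column j of A (0-indexed)
        M = [row[:j] + row[j + 1:] for row in A[1:]]
        # cofactor A_1j = (-1)^(1+j) det(M_1j); with 0-based j the sign is (-1)^j
        total += (-1) ** j * A[0][j] * det(M)
    return total
```

For instance, `det([[1, 2], [3, 4]])` evaluates to $1 \cdot 4 - 2 \cdot 3 = -2$. The recursion evaluates $n!$ products, so it is only practical for small matrices.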
> *Theorem*: if $A$ is an $n \times n$ matrix with $n \in \mathbb{N} \backslash \{1\}$ then $\det(A)$ can be expressed as a cofactor expansion using any row or column of $A$.
??? note "*Proof*:"
Will be added later.
We then have for an $n \times n$ matrix $A$ with $n \in \mathbb{N} \backslash \{1\}$

$$
\det(A) = a_{i1} A_{i1} + a_{i2} A_{i2} + \dots + a_{in} A_{in} = a_{1j} A_{1j} + a_{2j} A_{2j} + \dots + a_{nj} A_{nj},
$$

for any $i, j \in \{1, \dots, n\}$.
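A quick numerical check of this theorem, expanding one matrix along each of its rows in turn (the helper names `minor` and `det_along_row` are mine):

```python
def minor(A, i, j):
    # delete row i and column j of A (0-indexed)
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det_along_row(A, i=0):
    # cofactor expansion of det(A) along row i
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** (i + j) * A[i][j] * det_along_row(minor(A, i, j))
               for j in range(n))

A = [[2, 0, 1],
     [1, 3, 2],
     [0, 1, 4]]
# every row yields the same determinant
values = [det_along_row(A, i) for i in range(3)]
```

Here `values` comes out as `[21, 21, 21]`, in agreement with the theorem.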
> *Theorem*: if $A$ is an $n \times n$ matrix then $\det (A^T) = \det (A)$.
??? note "*Proof*:"
It may be observed that the result holds for $n=1$. Assume that the result holds for all $k \times k$ matrices and that $A$ is a $(k+1) \times (k+1)$ matrix for some $k \in \mathbb{N}$. Expanding $\det (A)$ along the first row of $A$ obtains

$$
\det(A) = a_{11} \det(M_{11}) - a_{12} \det(M_{12}) + \dots = a_{11} \det(M_{11}^T) - a_{12} \det(M_{12}^T) + \dots,
$$

where the induction hypothesis was applied to the $k \times k$ matrices $M_{1j}$ for $j \in \{1, \dots, k+1\}$.
The right hand side of the above equation is the expansion by minors of $\det(A^T)$ using the first column of $A^T$, therefore $\det(A^T) = \det(A)$.
> *Theorem*: if $A$ is an $n \times n$ triangular matrix with $n \in \mathbb{N}$, then the determinant of $A$ equals the product of the diagonal elements of $A$.
Hence if the claim holds for some $k \in \mathbb{N}$ then it also holds for $k+1$. The principle of natural induction implies now that for all $n \in \mathbb{N}$ we have

$$
\det(A) = a_{11} a_{22} \cdots a_{nn}.
$$
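As a sketch of this result, the cofactor-expansion routine (reproduced here so the example is self-contained) agrees with the diagonal product on an upper triangular example of my own choosing:

```python
from math import prod

def det(A):
    # determinant by cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

U = [[2, 5, 1],
     [0, 3, 7],
     [0, 0, 4]]
# for a triangular matrix the determinant is the product of the diagonal
diagonal_product = prod(U[i][i] for i in range(len(U)))  # 2 * 3 * 4 = 24
```

Both `det(U)` and `diagonal_product` evaluate to 24.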
> *Theorem*: let $E$ be an $n \times n$ elementary matrix and $A$ an $n \times n$ matrix with $n \in \mathbb{N}$ then we have
>
> $$
> \det(E A) = \det(E) \det(A),
> $$
>
> where
>
> $$
> \det(E) = \begin{cases} -1 &\text{ if $E$ is of type I},\\ \alpha \in \mathbb{R}\backslash \{0\} &\text{ if $E$ is of type II},\\ 1 &\text{ if $E$ is of type III}. \end{cases}
> $$
??? note "*Proof*:"
Will be added later.
Similar results hold for column operations, since for the elementary matrix $E$, $E^T$ is also an elementary matrix and $\det(A E) = \det((AE)^T) = \det(E^T A^T) = \det(E^T) \det(A^T) = \det(E) \det(A)$.
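The theorem can be checked numerically; in the sketch below the elementary matrices `E1`, `E2`, `E3` of types I, II and III are examples of my own choosing:

```python
def det(A):
    # determinant by cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def matmul(X, Y):
    # product of two square matrices of the same size
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1],
     [4, 3]]                  # det(A) = 2
E1 = [[0, 1], [1, 0]]         # type I: swap the two rows, det = -1
E2 = [[5, 0], [0, 1]]         # type II: scale row 1 by 5, det = 5
E3 = [[1, 0], [-2, 1]]        # type III: add -2 * (row 1) to row 2, det = 1
```

For each of the three matrices, `det(matmul(E, A))` equals `det(E) * det(A)`.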
> *Theorem*: an $n \times n$ matrix $A$ with $n \in \mathbb{N}$ is singular if and only if
>
> $$
> \det(A) = 0
> $$
??? note "*Proof*:"
Let $A$ be an $n \times n$ matrix with $n \in \mathbb{N}$. Matrix $A$ can be reduced to row echelon form with a finite number of row operations obtaining
$$
U = E_k E_{k-1} \cdots E_1 A,
$$
where $U$ is in $n \times n$ row echelon form and $E_i$ are $n \times n$ elementary matrices for $i \in \{1, \dots, k\}$. It follows then that

$$
\det(U) = \det(E_k) \det(E_{k-1}) \cdots \det(E_1) \det(A).
$$
Since the determinants of the elementary matrices are all nonzero, it follows that $\det(A) = 0$ if and only if $\det(U) = 0$. If $A$ is singular then $U$ has a row consisting entirely of zeros and hence $\det(U) = 0$. If $A$ is nonsingular then $U$ is triangular with 1's along the diagonal and hence $\det(U) = 1$.
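A small illustration of the theorem (the matrices are my own examples): `S` below is singular because its second row is twice its first, while `N` is nonsingular:

```python
def det(A):
    # determinant by cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

S = [[1, 2, 3],
     [2, 4, 6],   # = 2 * (row 1), so S is singular
     [0, 1, 1]]
N = [[1, 0, 2],
     [0, 1, 0],
     [3, 0, 1]]
```

As expected, `det(S)` evaluates to 0 while `det(N)` evaluates to -5.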
From this theorem we may pose a method for computing $\det(A)$: reduce $A$ to row echelon form $U = E_k E_{k-1} \cdots E_1 A$ while keeping track of the row operations used, and take

$$
\det(A) = \frac{\det(U)}{\det(E_k) \det(E_{k-1}) \cdots \det(E_1)}.
$$
> *Definition*: the **adjoint** of an $n \times n$ matrix $A$ with $n \in \mathbb{N}$ is given by
>
> $$
> \text{adj}(A) = \begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix},
> $$
>
> with $A_{ij}$ for $(i,j) \in \{1, \dots, n\} \times \{1, \dots, n\}$ the cofactors of $A$.
The use of the adjoint becomes apparent in the following theorem, which generally saves a lot of time and brain capacity.
> *Theorem*: let $A$ be a nonsingular $n \times n$ matrix with $n \in \mathbb{N}$ then we have
>
> $$
> A^{-1} = \frac{1}{\det(A)} \text{ adj}(A).
> $$
??? note "*Proof*:"
Suppose $A$ is a nonsingular $n \times n$ matrix with $n \in \mathbb{N}$; from the definition and the lemma above it follows that
$$
\text{adj}(A) \, A = \det(A) I,
$$
since $\det(A) \neq 0$, this may be rewritten as
$$
A^{-1} = \frac{1}{\det(A)} \text{ adj}(A).
$$
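The formula gives a direct (if expensive) way to invert a matrix. Below is a sketch in exact rational arithmetic; `cofactor` and `adjugate` are hypothetical helper names of my own:

```python
from fractions import Fraction

def det(A):
    # determinant by cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cofactor(A, i, j):
    # cofactor A_ij = (-1)^(i+j) det(M_ij)
    M = [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]
    return (-1) ** (i + j) * det(M)

def adjugate(A):
    # adj(A)[i][j] is the cofactor of a_ji (transpose of the cofactor matrix)
    n = len(A)
    return [[cofactor(A, j, i) for j in range(n)] for i in range(n)]

A = [[2, 1],
     [5, 3]]
d = det(A)                                   # det(A) = 1 here
inv = [[Fraction(entry, d) for entry in row] for row in adjugate(A)]
```

With this example `inv` is `[[3, -1], [-5, 2]]`, and multiplying it against `A` returns the identity.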
## Cramer's rule
> *Theorem*: let $A$ be an $n \times n$ nonsingular matrix with $n \in \mathbb{N}$ and let $\mathbf{b} \in \mathbb{R}^n$. Let $A_i$ be the matrix obtained by replacing the $i$th column of $A$ by $\mathbf{b}$. If $\mathbf{x}$ is the unique solution of $A\mathbf{x} = \mathbf{b}$ then
>
> $$
> x_i = \frac{\det(A_i)}{\det(A)}
> $$
>
> for $i \in \{1, \dots, n\}$.
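A sketch of the rule in exact arithmetic (the function name `cramer` and the example system are my own):

```python
from fractions import Fraction

def det(A):
    # determinant by cofactor expansion along the first row
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    # solve A x = b via x_i = det(A_i) / det(A),
    # where A_i is A with column i replaced by b
    n = len(A)
    d = det(A)
    x = []
    for i in range(n):
        Ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(A)]
        x.append(Fraction(det(Ai), d))
    return x

A = [[2, 1],
     [1, 3]]
b = [3, 5]
x = cramer(A, b)   # x = [4/5, 7/5]
```

Substituting back, $2 \cdot \tfrac{4}{5} + \tfrac{7}{5} = 3$ and $\tfrac{4}{5} + 3 \cdot \tfrac{7}{5} = 5$, as required.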
??? note "*Proof*:"
Let $A$ be an $n \times n$ nonsingular matrix with $n \in \mathbb{N}$ and let $\mathbf{b} \in \mathbb{R}^n$. If $\mathbf{x}$ is the unique solution of $A\mathbf{x} = \mathbf{b}$ then we have