# Relations
## Binary relations
> *Definition*: a binary relation $R$ between the sets $S$ and $T$ is a subset of the Cartesian product $S \times T$.
>
> * If $(a,b) \in R$ then $a$ is in relation $R$ to $b$, denoted by $aRb$.
> * The set $S$ is called the domain of the relation $R$ and the set $T$ the codomain.
> * If $S=T$ then $R$ is a relation on $S$.
> * This definition can be expanded to n-ary relations.
> *Definition*: let $R$ be a relation from a set $S$ to a set $T$. Then for each element $a \in S$ we define $[a]_R$ to be the set
>
> $$
> [a]_R := \{b \in T \;|\; aRb\}.
> $$
>
> This set is called the ($R$-) image of $a$.
>
> For $b \in T$ the set
>
> $$
> _R[b] := \{a \in S \;|\; aRb\}
> $$
>
is called the ($R$-) pre-image of $b$ or $R$-fiber of $b$.
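For a finite relation stored as a set of pairs, the image and the pre-image can be computed directly from the definitions. A minimal Python sketch (the relation and the helper names here are illustrative, not taken from the text):

```python
# An illustrative relation R between S = {1, 2, 3} and T = {"a", "b"}.
R = {(1, "a"), (1, "b"), (2, "b")}

def image(R, a):
    """The R-image [a]_R = {b in T | aRb}."""
    return {b for (x, b) in R if x == a}

def preimage(R, b):
    """The R-pre-image _R[b] = {a in S | aRb}."""
    return {a for (a, y) in R if y == b}

print(image(R, 1))       # {'a', 'b'}
print(preimage(R, "b"))  # {1, 2}
```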
Relations between finite sets can be described using matrices.
> *Definition*: if $S = \{s_1, \dots, s_n\}$ and $T = \{t_1, \dots, t_m\}$ are finite sets and $R \subseteq S \times T$ is a binary relation, then the adjacency matrix $A_R$ of the relation $R$ is the $n \times m$ matrix, with rows indexed by $S$ and columns by $T$, defined by
>
> $$
> A_{s,t} = \begin{cases} 1 &\text{ if } (s,t) \in R, \\ 0 &\text{ otherwise}. \end{cases}
> $$
For example, the adjacency matrix of relation $\leq$ on the set $\{1,2,3,4,5\}$ is the upper triangular matrix
$$
\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1\end{pmatrix}
$$
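A small sketch of how such an adjacency matrix can be computed; the function name and the representation as a list of rows are just one possible choice:

```python
def adjacency_matrix(S, T, related):
    """Adjacency matrix of {(s, t) | related(s, t)} with rows indexed by S and columns by T."""
    return [[1 if related(s, t) else 0 for t in T] for s in S]

# The relation <= on {1, ..., 5} yields the upper triangular matrix above.
S = [1, 2, 3, 4, 5]
for row in adjacency_matrix(S, S, lambda s, t: s <= t):
    print(row)
```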
Some relations have special properties.
> *Definitions*: let $R$ be a relation on a set $S$. Then $R$ is called
>
> * *Reflexive* if $\forall x \in S: (x,x) \in R$.
> * *Irreflexive* if $\forall x \in S: (x,x) \notin R$.
> * *Symmetric* if $\forall x,y \in S: xRy \implies yRx$.
> * *Antisymmetric* if $\forall x,y \in S: xRy \land yRx \implies x = y$.
> * *Transitive* if $\forall x,y,z \in S: xRy \land yRz \implies xRz$.
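For a finite relation given as a set of pairs these properties can be checked by brute force against the definitions; a minimal sketch (function names are illustrative):

```python
from itertools import product

def is_reflexive(R, S):
    return all((x, x) in R for x in S)

def is_irreflexive(R, S):
    return all((x, x) not in R for x in S)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_antisymmetric(R):
    return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R):
    return all((x, z) in R for (x, y) in R for (w, z) in R if w == y)

# The relation <= on {1, ..., 5} is reflexive, antisymmetric and transitive.
S = {1, 2, 3, 4, 5}
leq = {(x, y) for x, y in product(S, S) if x <= y}
print(is_reflexive(leq, S), is_antisymmetric(leq), is_transitive(leq))  # True True True
```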
## Equivalence relations
> *Definition*: a relation $R$ on a set $S$ is called an equivalence relation on $S$ if and only if it is reflexive, symmetric and transitive.
> *Lemma*: let $R$ be an equivalence relation on a set $S$. If $b \in [a]_R$, then $[b]_R = [a]_R$.
??? note "*Proof*:"
Suppose $b \in [a]_R$, so $aRb$. If $c \in [b]_R$, then $bRc$ and since $aRb$, transitivity gives $aRc$, so $c \in [a]_R$. Hence $[b]_R \subseteq [a]_R$. By symmetry of $R$, $aRb$ implies $bRa$, hence $a \in [b]_R$, and by the same argument $[a]_R \subseteq [b]_R$. Therefore $[a]_R = [b]_R$.
> *Definition*: let $R$ be an equivalence relation on a set $S$. Then the sets $[s]_R$ where $s \in S$ are called the $R$-equivalence classes on $S$. The set of $R$-equivalence classes is denoted by $S/R$.
> *Theorem*: let $R$ be an equivalence relation on a set $S$. Then the set $S/R$ of $R$-equivalence classes partitions the set $S$.
??? note "*Proof*:"
Let $\Pi_R$ be the set of $R$-equivalence classes. By reflexivity of $R$, each element $a \in S$ is inside the class $[a]_R$ of $\Pi_R$. If an element $a \in S$ is in the classes $[b]_R$ and $[c]_R$ of $\Pi_R$, then by the previous lemma $[b]_R = [a]_R$ and $[c]_R = [a]_R$, so $[b]_R = [c]_R$. Therefore each element $a \in S$ lies in a unique member of $\Pi_R$, and $\Pi_R$ is a partition of $S$.
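As an illustration, the equivalence classes of a finite equivalence relation can be collected directly from the definition; a minimal sketch (assuming $R$ really is an equivalence relation on $S$):

```python
def equivalence_classes(S, R):
    """The set S/R of R-equivalence classes [a]_R, assuming R is an equivalence relation on S."""
    return {frozenset(b for b in S if (a, b) in R) for a in S}

# Congruence modulo 3 on {0, ..., 8} gives the three classes {0,3,6}, {1,4,7}, {2,5,8}.
S = range(9)
R = {(a, b) for a in S for b in S if (a - b) % 3 == 0}
for cls in equivalence_classes(S, R):
    print(sorted(cls))
```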
## Composition of relations
If $R_1$ and $R_2$ are two relations between a set $S$ and a set $T$, new relations between $S$ and $T$ can be formed by taking the intersection $R_1 \cap R_2$, the union $R_1 \cup R_2$ or the set difference $R_1 \backslash R_2$. Furthermore, for a relation $R$ from $S$ to $T$ the converse relation $R^\top$ from $T$ to $S$ is defined as $\{(t,s) \in T \times S \;|\; (s,t) \in R\}$, and the identity relation on $S$ is given by $I = \{(s, t) \in S \times S \;|\; s = t\}$.
Another way of making new relations out of existing ones is by taking the composition.
> *Definition*: if $R_1$ is a relation between $S$ and $T$ and $R_2$ is a relation between $T$ and $U$ then the composition $R = R_1;R_2$ is the relation between $S$ and $U$ defined by $sRu$ for $s \in S$ and $u \in U$, if and only if there is a $t \in T$ with $sR_1t$ and $tR_2u$.
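The composition of two finite relations can be computed directly from this definition; a minimal sketch (names are illustrative):

```python
def compose(R1, R2):
    """The composition R1;R2: s is related to u iff there is a t with (s, t) in R1 and (t, u) in R2."""
    return {(s, u) for (s, t) in R1 for (w, u) in R2 if w == t}

R1 = {(1, "a"), (2, "b")}
R2 = {("a", "x"), ("a", "y"), ("c", "z")}
print(compose(R1, R2))  # {(1, 'x'), (1, 'y')}
```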
> *Proposition*: suppose $R_1$ is a relation from $S$ to $T$, $R_2$ a relation from $T$ to $U$ and $R_3$ a relation from $U$ to $V$. Then $R_1;(R_2;R_3) = (R_1;R_2);R_3$. That is, composition of relations is associative.
??? note "*Proof*:"
Suppose $s \in S$ and $v \in V$ with $s(R_1;(R_2;R_3))v$. Then a $t \in T$ with $sR_1t$ and $t(R_2;R_3)v$ can be found, and hence also a $u \in U$ with $tR_2u$ and $uR_3v$. For this $u$ there is $s(R_1;R_2)u$ and $uR_3v$, and hence $s((R_1;R_2);R_3)v$.
Conversely, suppose $s \in S$ and $v \in V$ with $s((R_1;R_2);R_3)v$. Then a $u \in U$ with $s(R_1;R_2)u$ and $uR_3v$ can be found, and hence also a $t \in T$ with $sR_1t$ and $tR_2u$. For this $t$ there is $t(R_2;R_3)v$ and $sR_1t$, and hence $s(R_1;(R_2;R_3))v$.
## Transitive closure
> *Lemma*: let $\ell$ be a collection of relations $R$ on a set $S$. If all relations $R$ in $\ell$ are transitive, reflexive or symmetric then the relation $\bigcap_{R \in \ell} R$ is also transitive, reflexive or symmetric respectively.
??? note "*Proof*:"
Let $\bar R = \bigcap_{R \in \ell} R$. Suppose all members of $\ell$ are transitive. Then for all $a,b,c \in S$ with $a \bar R b$ and $b \bar R c$ we have $aRb$ and $bRc$ for all $R \in \ell$. Thus by transitivity of each $R \in \ell$ we also have $aRc$ for each $R \in \ell$, hence $a \bar R c$ and $\bar R$ is transitive.
Suppose all members of $\ell$ are reflexive. Then for each $a \in S$ we have $aRa$ for all $R \in \ell$, hence $a \bar R a$ and $\bar R$ is reflexive.
Suppose all members of $\ell$ are symmetric. Then for all $a,b \in S$ with $a \bar R b$ we have $aRb$ and hence $bRa$ for all $R \in \ell$, so $b \bar R a$ and $\bar R$ is symmetric.
The above lemma makes it possible to define the reflexive, symmetric or transitive closure of a relation $R$ on a set $S$: it is the smallest reflexive, symmetric or transitive relation containing $R$, namely the intersection of all reflexive, symmetric or transitive relations on $S$ that contain $R$.
For example suppose $R = \{(1,2), (2,2), (2,3), (5,4)\}$ is a relation on $S = \{1, 2, 3, 4, 5\}$.
: The reflexive closure of $R$ is then the relation
$$
\big\{(1,1), (1,2), (2,2), (2,3), (3,3), (4,4), (5,5), (5,4) \big\},
$$
the symmetric closure of $R$ is then the relation
$$
\big\{ (1,2), (2,1), (2,2), (2,3), (3,2), (4,5), (5,4) \big\},
$$
and the transitive closure of $R$ is then the relation
$$
\{(1,2), (1,3), (2,2), (2,3), (5,4)\}.
$$
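The reflexive and symmetric closures of this example can be computed directly (a minimal sketch; the transitive closure is treated after the proposition below):

```python
def reflexive_closure(R, S):
    """R together with the identity relation on S."""
    return R | {(x, x) for x in S}

def symmetric_closure(R):
    """R together with its converse relation."""
    return R | {(y, x) for (x, y) in R}

S = {1, 2, 3, 4, 5}
R = {(1, 2), (2, 2), (2, 3), (5, 4)}
print(sorted(reflexive_closure(R, S)))  # matches the reflexive closure above
print(sorted(symmetric_closure(R)))     # matches the symmetric closure above
```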
It may be observed that the reflexive closure of $R$ equals the relation $I \cup R$ and the symmetric closure equals $R \cup R^\top$. For the transitive closure, writing $R^n$ for the $n$-fold composition $R;R;\cdots;R$ ($n$ factors), there is the following proposition.
> *Proposition*: $\bigcup_{n > 0} R^n$ is the transitive closure of the relation $R$ on a set $S$.
??? note "*Proof*:"
Define $\bar R = \bigcup_{n>0} R^n$. To show that $\bar R$ is the least transitive relation containing $R$, we must show that $\bar R$ contains $R$, that $\bar R$ is transitive, and that $\bar R$ is contained in every transitive relation containing $R$.
By definition $\bar R$ contains all of the $R^i$, $i \geq 1$, so in particular $\bar R$ contains $R = R^1$.
If $(s_1, s_2), (s_2, s_3) \in \bar R$, then $(s_1, s_2) \in R^j$ and $(s_2, s_3) \in R^k$ for some $j,k$. Since composition is [associative](#composition-of-relations), $R^{j+k} = R^j ; R^k$ and hence $(s_1, s_3) \in R^{j+k} \subseteq \bar R$.
We claim that if $T$ is any transitive relation containing $R$, then $\bar R \subseteq T$. It suffices to show that $R^n \subseteq T$ for all $n \geq 1$, which we prove by induction on $n$.
: We first check for $n=1$
$$
R^1 = R \subseteq T.
$$
: Now suppose that for some $k \in \mathbb{N}$ we have $R^k \subseteq T$. Let $(s_1, s_3) \in R^{k+1} = R^k ; R$, then $(s_1, s_2) \in R^k$ and $(s_2, s_3) \in R$ for some $s_2$. Hence $(s_1, s_2), (s_2, s_3) \in T$ and by transitivity of $T$, $(s_1, s_3) \in T$, so $R^{k+1} \subseteq T$.
Hence if the claim holds for some $k \in \mathbb{N}$ then it also holds for $k+1$. The principle of natural induction now implies that $R^n \subseteq T$ for all $n \geq 1$, and therefore $\bar R = \bigcup_{n>0} R^n \subseteq T$.
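Following the proposition, the transitive closure of a finite relation can be computed by accumulating the powers $R, R^2, R^3, \dots$ until the union stops growing; a minimal sketch (reusing the composition from above):

```python
def compose(R1, R2):
    """The composition R1;R2 of two relations given as sets of pairs."""
    return {(s, u) for (s, t) in R1 for (w, u) in R2 if w == t}

def transitive_closure(R):
    """The union of R, R^2, R^3, ...; for a finite relation it stabilises after finitely many steps."""
    closure, power = set(R), set(R)
    while True:
        power = compose(power, R)   # the next power R^(n+1) = R^n ; R
        if power <= closure:        # nothing new can appear any more
            return closure
        closure |= power

R = {(1, 2), (2, 2), (2, 3), (5, 4)}
print(sorted(transitive_closure(R)))  # [(1, 2), (1, 3), (2, 2), (2, 3), (5, 4)]
```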
Suppose a relation $R$ on a finite set $S$ of size $n$ is given by its adjacency matrix $A_R$. Then Warshall's algorithm is a method for finding the adjacency matrix of the transitive closure of the relation $R$.
> *Algorithm* **- Warshall's algorithm**: for the adjacency matrix $A_R = M_0$ of a relation $R$ on $n$ elements, $n$ steps are taken to obtain the adjacency matrix of the transitive closure of $R$. In step $i$, let $C_i$ be the set of row indices of the nonzero entries in the $i$th column of $M_{i-1}$ and let $R_i$ be the set of column indices of the nonzero entries in the $i$th row of $M_{i-1}$. The matrix $M_i$ is obtained from $M_{i-1}$ by placing a $1$ at every position in $C_i \times R_i$. After $n$ steps $M_n = A_{\bar R}$ is obtained.
For example, let $R$ be a relation on $S = \{1,2,3,4\}$ with $R = \{(2,1), (2,3), (3,1), (3,4), (4,1), (4,3)\}$; we determine the transitive closure $\bar R$ of $R$ with Warshall's algorithm.
: The adjacency matrix of the relation $R$ is given by
$$
A_R = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 0\end{pmatrix}.
$$
We have $C_1 = \{2,3,4\}$ and $R_1 = \varnothing$, therefore $C_1 \times R_1 = \varnothing$ and no additions will be made, $M_1 = A_R$.
We have $C_2 = \varnothing$ and $R_2 = \{1,3\}$, therefore $C_2 \times R_2 = \varnothing$ and no additions will be made, $M_2 = M_1$.
We have $C_3 = \{2,4\}$ and $R_3 = \{1,4\}$, therefore $C_3 \times R_3 = \{(2,1), (2,4), (4,1), (4,4)\}$ obtaining the matrix
$$
M_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1\end{pmatrix}.
$$
We have $C_4 = \{2,3,4\}$ and $R_4 = \{1,3,4\}$, therefore $C_4 \times R_4 = \{(2,1), (2,3), (2,4), (3,1), (3,3), (3,4), (4,1), (4,3), (4,4)\}$ obtaining the final matrix
$$
M_4 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1\end{pmatrix} = A_{\bar R}.
$$
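The computation above can be reproduced with a short sketch of Warshall's algorithm, assuming the relation is given as a 0/1 matrix (list of rows) as in the example:

```python
def warshall(A):
    """Adjacency matrix of the transitive closure, computed in n steps as described above."""
    n = len(A)
    M = [row[:] for row in A]                       # M_0 is a copy of A_R
    for i in range(n):                              # step i uses column i and row i of M_{i-1}
        C = [j for j in range(n) if M[j][i] == 1]   # C_i: rows with a 1 in column i
        R = [k for k in range(n) if M[i][k] == 1]   # R_i: columns with a 1 in row i
        for j in C:
            for k in R:
                M[j][k] = 1                         # add every pair in C_i x R_i
    return M

A_R = [[0, 0, 0, 0],
       [1, 0, 1, 0],
       [1, 0, 0, 1],
       [1, 0, 1, 0]]
for row in warshall(A_R):
    print(row)  # [0,0,0,0], [1,0,1,1], [1,0,1,1], [1,0,1,1]
```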