diff --git a/docs/en/physics/mathematical-physics/vector-analysis/curvilinear-coordinates.md b/docs/en/physics/mathematical-physics/vector-analysis/curvilinear-coordinates.md
index e69de29..2cb55b7 100644
--- a/docs/en/physics/mathematical-physics/vector-analysis/curvilinear-coordinates.md
+++ b/docs/en/physics/mathematical-physics/vector-analysis/curvilinear-coordinates.md
@@ -0,0 +1,299 @@
+# Curvilinear coordinate systems
+
+In this section curvilinear coordinate systems are presented; these are coordinate systems based on a set of basis vectors that are, in general, neither orthogonal nor normalized.
+
+> *Principle*: space can be equipped with a smooth and continuous coordinate net.
+
+## Covariant basis
+
+> *Definition*: consider a coordinate system $(x_1, x_2, x_3)$ defined by a function $\mathbf{x}: \mathbb{R}^3 \to \mathbb{R}^3$ that produces a position vector for every combination of coordinate values.
+>
+> * For two coordinates fixed, a coordinate curve is obtained.
+> * For one coordinate fixed, a coordinate surface is obtained.
+
+We will now use this coordinate map $\mathbf{x}$ to construct a set of basis vectors.
+
+> *Definition*: for a valid coordinate system $\mathbf{x}$ a set of linearly independent covariant (local) basis vectors can be described by
+>
+> $$
+> \mathbf{a}_i(x_1, x_2, x_3) := \partial_i \mathbf{x}(x_1, x_2, x_3),
+> $$
+>
+> for all $(x_1, x_2, x_3) \in \mathbb{R}^3$ and $i \in \{1, 2, 3\}$.
+
+The basis vectors obtained in this way are tangential to the corresponding coordinate curves. Therefore any vector $\mathbf{u} \in \mathbb{R}^3$ can be written in terms of its components with respect to this basis
+
+$$
+    \mathbf{u} = \sum_{i=1}^3 u^i \mathbf{a}_i
+$$
+
+with $u^1, u^2, u^3 \in \mathbb{R}$ the components.
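+
+As an illustrative sketch (assuming a Python environment with `sympy`, which the text itself does not prescribe), the covariant basis vectors can be obtained by differentiating a coordinate map symbolically; the cylindrical map treated at the end of this section is used as a concrete instance.
+
+```python
+import sympy as sp
+
+# cylindrical coordinate map x(r, theta, z) = (r cos(theta), r sin(theta), z)
+r, theta, z = sp.symbols('r theta z', positive=True)
+x = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta), z])
+
+# covariant basis vectors a_i = d x / d x_i, tangential to the coordinate curves
+a = [x.diff(q) for q in (r, theta, z)]
+
+for i, a_i in enumerate(a, start=1):
+    print(f"a_{i} =", list(a_i))
+```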
+
+> *Definition*: the Einstein summation convention omits the summation symbol and is defined by
+>
+> $$
+> \mathbf{u} = \sum_{i=1}^3 u^i \mathbf{a}_i = u^i \mathbf{a}_i,
+> $$
+>
+> with $u^1, u^2, u^3 \in \mathbb{R}$ the contravariant components. The definition states that
+>
+> 1. When an index appears twice in a product, once as a subscript and once as a superscript, summation over that index is implied.
+> 2. A superscript that appears in a denominator counts as a subscript.
+
+This convention makes writing summations considerably more compact, though it may take some getting used to.
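+
+As a loose computational analogue (a sketch using `numpy`; the library choice and the example arrays are assumptions, not part of the text), the implied summation over a repeated index is simply a contraction, which can be spelled out with `numpy.einsum`:
+
+```python
+import numpy as np
+
+# rows of A play the role of the covariant basis vectors a_i,
+# u holds the contravariant components u^i
+A = np.array([[1.0, 0.0, 0.0],
+              [1.0, 1.0, 0.0],
+              [0.0, 0.0, 1.0]])
+u = np.array([2.0, -1.0, 3.0])
+
+# u^i a_i: the repeated index i is summed over, exactly as the convention implies
+vec = np.einsum('i,ij->j', u, A)
+print(vec)   # identical to sum(u[i] * A[i] for i in range(3))
+```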
+
+## The metric tensor
+
+> *Definition*: for two vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^3$ that are represented in terms of a covariant basis, the scalar product is given by
+>
+> $$
+> \langle \mathbf{u}, \mathbf{v} \rangle = u^i v^j \langle \mathbf{a}_i, \mathbf{a}_j \rangle = u^i v^j g_{ij},
+> $$
+>
+> with $g_{ij}$ the components of a structure that is called the metric tensor given by
+>
+> $$
+> (g_{ij}) := \begin{pmatrix} \langle \mathbf{a}_1, \mathbf{a}_1 \rangle & \langle \mathbf{a}_1, \mathbf{a}_2 \rangle & \langle \mathbf{a}_1, \mathbf{a}_3 \rangle \\ \langle \mathbf{a}_2, \mathbf{a}_1 \rangle & \langle \mathbf{a}_2, \mathbf{a}_2 \rangle & \langle \mathbf{a}_2, \mathbf{a}_3 \rangle \\ \langle \mathbf{a}_3, \mathbf{a}_1 \rangle & \langle \mathbf{a}_3, \mathbf{a}_2 \rangle & \langle \mathbf{a}_3, \mathbf{a}_3 \rangle \end{pmatrix}.
+> $$
+
+For the special case of an orthogonal set of basis vectors, all off-diagonal elements are zero and the metric tensor $(g_{ij})$ is given by
+
+$$
+ (g_{ij}) = \begin{pmatrix} \langle \mathbf{a}_1, \mathbf{a}_1 \rangle & & \\ & \langle \mathbf{a}_2, \mathbf{a}_2 \rangle & \\ & & \langle \mathbf{a}_3, \mathbf{a}_3 \rangle\end{pmatrix} = \begin{pmatrix} h_1^2 & & \\ & h_2^2 & \\ & & h_3^2\end{pmatrix},
+$$
+
+with $h_i = \sqrt{\langle \mathbf{a}_i, \mathbf{a}_i \rangle} = \|\mathbf{a}_i\|$ the scale factors for $i \in \{1, 2, 3\}$.
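+
+A minimal sketch (again assuming `sympy`; the sheared map below is a made-up example, not taken from the text) computes $g_{ij} = \langle \mathbf{a}_i, \mathbf{a}_j \rangle$ directly and shows that a non-orthogonal basis produces off-diagonal entries, whereas an orthogonal map reduces to the diagonal form above.
+
+```python
+import sympy as sp
+
+def metric(x, coords):
+    """Metric tensor with entries g_ij = <a_i, a_j>, where a_i = d x / d coords[i]."""
+    a = [x.diff(q) for q in coords]
+    return sp.Matrix(3, 3, lambda i, j: sp.simplify(a[i].dot(a[j])))
+
+u, v, w = sp.symbols('u v w')
+r, theta, z = sp.symbols('r theta z', positive=True)
+
+# a sheared, non-orthogonal map: off-diagonal entries appear
+print(metric(sp.Matrix([u + v, v, w]), (u, v, w)))
+# Matrix([[1, 1, 0], [1, 2, 0], [0, 0, 1]])
+
+# the cylindrical map: orthogonal, so g is diagonal with the squared scale factors
+print(metric(sp.Matrix([r * sp.cos(theta), r * sp.sin(theta), z]), (r, theta, z)))
+# Matrix([[1, 0, 0], [0, r**2, 0], [0, 0, 1]])
+```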
+
+> *Theorem*: the determinant of the metric tensor $g := \det(g_{ij})$ can be written as the square of the scalar triple product of the covariant basis vectors
+>
+> $$
+> g = \langle \mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3 \rangle^2.
+> $$
+
+??? note "*Proof*:"
+
+ Will be added later.
+
+
+
+> *Corollary*: consider a covariant basis and infinitesimal coordinate increments $(dx_1, dx_2, dx_3)$; the infinitesimal volume spanned by the covariant basis vectors scaled by these increments is given by
+>
+> $$
+> \begin{align*}
+> dV &= \langle dx_1 \mathbf{a}_1, dx_2 \mathbf{a}_2, dx_3 \mathbf{a}_3 \rangle, \\
+> &= \sqrt{g} dx_1 dx_2 dx_3,
+> \end{align*}
+> $$
+>
+> by definition of the scalar triple product. For a function $f: \mathbb{R}^3 \to \mathbb{R}$ its integral over the closed domain $D = [a_1, b_1] \times [a_2, b_2] \times [a_3, b_3] \subseteq \mathbb{R}^3$, with $a_i, b_i \in \mathbb{R}$ for $i \in \{1, 2, 3\}$, may be written as
+>
+> $$
+> \int_D f(x_1, x_2, x_3)dV = \int_{a_1}^{b_1} \int_{a_2}^{b_2} \int_{a_3}^{b_3} f(x_1, x_2, x_3) \sqrt{g} dx_1 dx_2 dx_3.
+> $$
+
+??? note "*Proof*:"
+
+ Will be added later.
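+
+The theorem and corollary can be checked on the cylindrical map (a sketch assuming `sympy`; the radius $R$ and height $H$ below are illustrative symbols, not from the text): the determinant of the metric equals the squared scalar triple product, $\sqrt{g} = r$, and integrating $\sqrt{g}\,dr\,d\theta\,dz$ over a cylinder gives the familiar volume $\pi R^2 H$.
+
+```python
+import sympy as sp
+
+r, theta, z, R, H = sp.symbols('r theta z R H', positive=True)
+
+# cylindrical map x(r, theta, z) = (r cos(theta), r sin(theta), z)
+x = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta), z])
+a = [x.diff(q) for q in (r, theta, z)]
+
+g = sp.Matrix(3, 3, lambda i, j: a[i].dot(a[j]))
+triple = a[0].dot(a[1].cross(a[2]))       # scalar triple product <a_1, a_2, a_3>
+
+print(sp.simplify(g.det() - triple**2))   # 0, i.e. g = <a_1, a_2, a_3>^2
+sqrt_g = sp.simplify(triple)              # r: the volume element is r dr dtheta dz
+
+# volume of a cylinder of radius R and height H
+V = sp.integrate(sqrt_g, (r, 0, R), (theta, 0, 2 * sp.pi), (z, 0, H))
+print(V)                                  # the familiar cylinder volume pi*R**2*H
+```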
+
+## Contravariant basis
+
+The covariant basis vectors have been constructed as tangential vectors of the coordinate curves. An alternative basis can be constructed from vectors that are perpendicular to coordinate surfaces.
+
+> *Definition*: for a valid set of covariant basis vectors, the contravariant basis vectors are defined by
+>
+> $$
+> \begin{align*}
+> \mathbf{a}^1 &:= \frac{1}{\sqrt{g}} (\mathbf{a}_2 \times \mathbf{a}_3), \\
+> \mathbf{a}^2 &:= \frac{1}{\sqrt{g}} (\mathbf{a}_3 \times \mathbf{a}_1), \\
+> \mathbf{a}^3 &:= \frac{1}{\sqrt{g}} (\mathbf{a}_1 \times \mathbf{a}_2).
+> \end{align*}
+> $$
+
+From this definition it follows that $\langle \mathbf{a}^i, \mathbf{a}_j \rangle = \delta_j^i$, with $\delta_j^i$ the Kronecker delta defined by
+
+> *Definition*: the Kronecker delta $\delta_j^i$ is defined as
+>
+> $$
+> \delta_j^i = \begin{cases} 1 &\text{ if } i = j, \\ 0 &\text{ if } i \neq j.\end{cases}
+> $$
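+
+The duality relation $\langle \mathbf{a}^i, \mathbf{a}_j \rangle = \delta_j^i$ can be checked numerically (a sketch assuming `numpy`; the sheared basis below is a made-up, non-orthogonal example):
+
+```python
+import numpy as np
+
+# covariant basis of a sheared (non-orthogonal) map
+a1, a2, a3 = np.array([1.0, 0, 0]), np.array([1.0, 1, 0]), np.array([0.0, 0, 1])
+
+sqrt_g = a1 @ np.cross(a2, a3)         # scalar triple product (positive here)
+
+# contravariant basis vectors a^i, stacked as rows
+A_up = np.array([np.cross(a2, a3),
+                 np.cross(a3, a1),
+                 np.cross(a1, a2)]) / sqrt_g
+
+A_down = np.array([a1, a2, a3])        # covariant basis vectors as rows
+print(A_up @ A_down.T)                 # identity matrix: <a^i, a_j> = delta^i_j
+```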
+
+A metric tensor for the contravariant basis vectors can also be defined; with it, the relations between covariant and contravariant quantities can be found.
+
+> *Definition*: the components of the metric tensor for contravariant basis vectors are defined as
+>
+> $$
+> g^{ij} := \langle \mathbf{a}^i, \mathbf{a}^j \rangle,
+> $$
+>
+> therefore the metric tensor for contravariant basis vectors is given by
+>
+> $$
+> (g^{ij}) = \begin{pmatrix} \langle \mathbf{a}^1, \mathbf{a}^1 \rangle & \langle \mathbf{a}^1, \mathbf{a}^2 \rangle & \langle \mathbf{a}^1, \mathbf{a}^3 \rangle \\ \langle \mathbf{a}^2, \mathbf{a}^1 \rangle & \langle \mathbf{a}^2, \mathbf{a}^2 \rangle & \langle \mathbf{a}^2, \mathbf{a}^3 \rangle \\ \langle \mathbf{a}^3, \mathbf{a}^1 \rangle & \langle \mathbf{a}^3, \mathbf{a}^2 \rangle & \langle \mathbf{a}^3, \mathbf{a}^3 \rangle \end{pmatrix}.
+> $$
+
+
+
+> *Lemma*: consider the two ways of representing a vector $\mathbf{u} \in \mathbb{R}^3$ given by
+>
+> $$
+> \mathbf{u} = u^i \mathbf{a}_i = u_i \mathbf{a}^i.
+> $$
+>
+> From the definitions given above, the relations between the covariant and contravariant quantities of the vector $\mathbf{u}$ are found to be
+>
+> $$
+> u_i = g_{ij} u^j, \qquad \mathbf{a}_i = g_{ij} \mathbf{a}^j,
+> $$
+>
+> $$
+> u^i = g^{ij} u_j, \qquad \mathbf{a}^i = g^{ij} \mathbf{a}_j.
+> $$
+
+??? note "*Proof*:"
+
+ Will be added later.
+
+By combining the expressions for the components a relation can be established between $g_{ij}$ and $g^{ij}$.
+
+> *Theorem*: the components of the metric tensor for covariant and contravariant basis vectors are related by
+>
+> $$
+> g_{ij} g^{jk} = \delta_i^k.
+> $$
+
+??? note "*Proof*:"
+
+ Will be added later.
+
+This is the index notation for the matrix identity $(g_{ij})(g^{ij}) = I$, with $I$ the identity matrix. Therefore we have
+
+$$
+    (g^{ij}) = (g_{ij})^{-1},
+$$
+
+so in particular both matrices are nonsingular.
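+
+A short numerical check of the lemma and theorem above (a sketch assuming `numpy`, with a made-up non-orthogonal basis): lowering an index with $g_{ij}$ and raising it again with $g^{ij} = (g_{ij})^{-1}$ recovers the original components.
+
+```python
+import numpy as np
+
+# rows of A are the covariant basis vectors a_i; u_up holds contravariant components u^i
+A = np.array([[1.0, 0, 0], [1.0, 1, 0], [0.0, 0, 1]])
+u_up = np.array([2.0, -1.0, 3.0])
+
+g_down = A @ A.T                  # g_ij = <a_i, a_j>
+g_up = np.linalg.inv(g_down)      # g^ij = (g_ij)^{-1}
+
+u_down = g_down @ u_up            # lowering the index: u_i = g_ij u^j
+print(np.allclose(g_up @ u_down, u_up))        # True: u^i = g^ij u_j is recovered
+print(np.allclose(g_down @ g_up, np.eye(3)))   # True: g_ij g^jk = delta_i^k
+```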
+
+> *Corollary*: let $\mathbf{u} \in \mathbb{R}^3$ be a vector. For orthogonal basis vectors it follows that the covariant and contravariant basis vectors are related by
+>
+> $$
+> \mathbf{a}^i = \frac{1}{h_i^2} \mathbf{a}_i,
+> $$
+>
+> and for the components of $\mathbf{u}$ we have
+>
+> $$
+> u^i = \frac{1}{h_i^2} u_i,
+> $$
+>
+> for all $i \in \{1, 2, 3\}$.
+
+??? note "*Proof*:"
+
+ Will be added later.
+
+It therefore also follows that for the special case of orthogonal basis vectors the metric tensor for contravariant basis vectors $(g^{ij})$ is given by
+
+$$
+ (g^{ij}) = \begin{pmatrix} \langle \mathbf{a}^1, \mathbf{a}^1 \rangle & & \\ & \langle \mathbf{a}^2, \mathbf{a}^2 \rangle & \\ & & \langle \mathbf{a}^3, \mathbf{a}^3 \rangle\end{pmatrix} = \begin{pmatrix} \frac{1}{h_1^2} & & \\ & \frac{1}{h_2^2} & \\ & & \frac{1}{h_3^2}\end{pmatrix},
+$$
+
+with $h_i = \sqrt{\langle \mathbf{a}_i, \mathbf{a}_i \rangle} = \|\mathbf{a}_i\|$ the scale factors for $i \in \{1, 2, 3\}$.
+
+## Physical components
+
+A third representation of vectors uses physical components and normalized basis vectors.
+
+> *Definition*: from the above corollary, the normalized basis vectors and the physical components of a vector $\mathbf{u} \in \mathbb{R}^3$ can be defined as
+>
+> $$
+> \mathbf{e}_{(i)} := h_i \mathbf{a}^i = \frac{1}{h_i} \mathbf{a}_i,
+> $$
+>
+> $$
+> u_{(i)} := h_i u^i = \frac{1}{h_i} u_i,
+> $$
+>
+> for all $i \in \{1, 2, 3\}$.
+
+This yields the physical component representation
+
+$$
+ \mathbf{u} = u^{(i)} \mathbf{e}_{(i)},
+$$
+
+with summation over $i \in \{1, 2, 3\}$; for physical components the index position is immaterial, so $u^{(i)} = u_{(i)}$.
+
+> *Proposition*: the following properties are obtained
+>
+> $$
+> \langle \mathbf{e}_{(i)}, \mathbf{e}_{(i)} \rangle = \frac{1}{h_i^2} \langle \mathbf{a}_i, \mathbf{a}_i \rangle = 1,
+> $$
+>
+> and for vectors $\mathbf{u}, \mathbf{v} \in \mathbb{R}^3$ we have
+>
+> $$
+> \langle \mathbf{u}, \mathbf{v} \rangle = u^{(i)} v_{(i)}.
+> $$
+
+??? note "*Proof*:"
+
+ Will be added later.
+
+In particular the length of a vector $\mathbf{u} \in \mathbb{R}^3$ can then be determined by
+
+$$
+ \|\mathbf{u}\| = \sqrt{u^{(i)} u_{(i)}}.
+$$
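+
+A brief sketch (assuming `numpy`; the evaluation point and the components below are arbitrary choices) illustrates the physical components for the cylindrical basis and checks the length formula above.
+
+```python
+import numpy as np
+
+# cylindrical covariant basis evaluated at r = 2, theta = 0.3 (an arbitrary point)
+r, theta = 2.0, 0.3
+a1 = np.array([np.cos(theta), np.sin(theta), 0.0])
+a2 = np.array([-r * np.sin(theta), r * np.cos(theta), 0.0])
+a3 = np.array([0.0, 0.0, 1.0])
+h = np.array([1.0, r, 1.0])                       # scale factors h_i = |a_i|
+
+u_up = np.array([0.5, 2.0, -1.0])                 # contravariant components u^i
+u = u_up[0] * a1 + u_up[1] * a2 + u_up[2] * a3    # the vector itself
+
+u_phys = h * u_up                                 # physical components u^(i) = h_i u^i
+print(np.allclose(np.linalg.norm(u), np.sqrt(np.sum(u_phys**2))))   # True
+```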
+
+We will discuss as an example the representations of the Cartesian, cylindrical and spherical coordinate systems viewed from a Cartesian perspective. This means that the coordinate maps are expressed in terms of Cartesian components; any other interpretation could have been used, but the Cartesian one is by far the most familiar.
+
+Let $\mathbf{x}: \mathbb{R}^3 \to \mathbb{R}^3$ define a Cartesian coordinate system, given by
+
+$$
+ \mathbf{x}(x,y,z) = \begin{pmatrix} x \\ y \\ z\end{pmatrix},
+$$
+
+then we have the covariant basis vectors given by
+
+$$
+ \mathbf{a}_i(x,y,z) = \partial_i \mathbf{x}(x,y,z),
+$$
+
+obtaining $\mathbf{a}_1 = \begin{pmatrix} 1 \\ 0 \\ 0\end{pmatrix}, \mathbf{a}_2 = \begin{pmatrix} 0 \\ 1 \\ 0\end{pmatrix}, \mathbf{a}_3 = \begin{pmatrix} 0 \\ 0 \\ 1\end{pmatrix}$.
+
+It may be observed that this set of basis vectors is orthogonal (in fact orthonormal). Therefore the scale factors are given by $h_1 = 1, h_2 = 1, h_3 = 1$, as is to be expected for the reference system.
+
+
+
+Let $\mathbf{x}: \mathbb{R}^3 \to \mathbb{R}^3$ define a cylindrical coordinate system, given by
+
+$$
+ \mathbf{x}(r,\theta,z) = \begin{pmatrix} r \cos \theta \\ r \sin \theta \\ z\end{pmatrix},
+$$
+
+then we have the covariant basis vectors given by
+
+$$
+ \mathbf{a}_i(r,\theta,z) = \partial_i \mathbf{x}(r,\theta,z),
+$$
+
+obtaining $\mathbf{a}_1(\theta) = \begin{pmatrix} \cos \theta \\ \sin \theta \\ 0\end{pmatrix}, \mathbf{a}_2(r, \theta) = \begin{pmatrix} -r\sin \theta \\ r \cos \theta \\ 0\end{pmatrix}, \mathbf{a}_3 = \begin{pmatrix} 0 \\ 0 \\ 1\end{pmatrix}$.
+
+It may be observed that this set of basis vectors is orthogonal. Therefore the scale factors are given by $h_1 = 1, h_2 = r, h_3 = 1$.
+
+
+
+Let $\mathbf{x}: \mathbb{R}^3 \to \mathbb{R}^3$ define a spherical coordinate system, given by
+
+$$
+ \mathbf{x}(r,\theta,\varphi) = \begin{pmatrix}r \cos \theta \sin \varphi \\ r \sin \theta \sin \varphi \\ r \cos \varphi\end{pmatrix},
+$$
+
+using the mathematical convention, then we have the covariant basis vectors given by
+
+$$
+ \mathbf{a}_i(r,\theta,\varphi) = \partial_i \mathbf{x}(r,\theta,\varphi),
+$$
+
+obtaining $\mathbf{a}_1(\theta, \varphi) = \begin{pmatrix} \cos \theta \sin \varphi \\ \sin \theta \sin \varphi\\ \cos \varphi\end{pmatrix}, \mathbf{a}_2(r, \theta, \varphi) = \begin{pmatrix} -r\sin \theta \sin \varphi \\ r \cos \theta \sin \varphi \\ 0\end{pmatrix}, \mathbf{a}_3(r, \theta, \varphi) = \begin{pmatrix} r \cos \theta \cos \varphi \\ r \sin \theta \cos \varphi \\ - r \sin \varphi\end{pmatrix}$.
+
+It may be observed that this set of basis vectors is orthogonal. Therefore the scale factors are given by $h_1 = 1, h_2 = r \sin \varphi, h_3 = r$.
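+
+These statements about the spherical basis can be verified symbolically (a sketch assuming `sympy`):
+
+```python
+import sympy as sp
+
+r, theta, phi = sp.symbols('r theta varphi', positive=True)
+
+# spherical map in the mathematical convention used above
+x = sp.Matrix([r * sp.cos(theta) * sp.sin(phi),
+               r * sp.sin(theta) * sp.sin(phi),
+               r * sp.cos(phi)])
+a = [x.diff(q) for q in (r, theta, phi)]
+
+# metric tensor: all off-diagonal entries vanish, so the basis is orthogonal;
+# the diagonal holds the squared scale factors 1, r**2*sin(varphi)**2, r**2
+g = sp.Matrix(3, 3, lambda i, j: sp.simplify(a[i].dot(a[j])))
+print(g)
+```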
\ No newline at end of file