# Markov Processes

### From bio-physics-wiki

Random events in elementary probability theory are usually independent of each other. Considering a die, the probability of rolling a $6$ is independent of the numbers rolled before; the probability of a sequence of events therefore satisfies the multiplicative property. Markov processes generalise such independent processes by allowing the future state to depend on the current state. We therefore introduce the conditional probability that, given the current state $\{a_1 \}$ at time $t_1$, the state $\{ a_2 \}$ is reached at time $t_2$.
\begin{align}
P_{1|1}( \{a_2 \}t_2 | \{ a_1\}t_1)
\end{align}
$P_{1|1}$ is called a *stochastic matrix*; its entries are the *transition probabilities* from $\{a_1 \}t_1$ to $\{ a_2\}t_2$. $P_{1|1}$ has the property that the transition probabilities in each row sum to one,
\begin{align}
\sum_{ \{a_2 \}} P_{1|1}( \{a_2 \}t_2 | \{ a_1\}t_1)=1
\end{align}

**Definition**

In a Markov process the current probability distribution $P_1(\{ a_1\}t_1)$ and the stochastic matrix $P_{1|1}( \{a_2 \}t_2 | \{ a_1\}t_1)$ uniquely define the probability distribution $P_2(\{a_2 \}t_2;\{ a_1\}t_1)$. In other words, $P_{1|1}$ contains all the necessary information about the process. Note that the 'memory' of the system extends only one step backwards.
\begin{align}
P_2(\{a_2 \}t_2;\{ a_1\}t_1)&=P_{1|1}(\{a_2 \}t_2|\{ a_1\}t_1) \cdot P_1(\{ a_1\}t_1)
\end{align}

**Example:**
To illustrate this definition let us discuss the random walk problem, which in its essence is a Markov process. For simplicity we assume that only $N=10$ discrete positions $i=1,2,\dots ,10$ are available for the molecule and divide the distance $a$ (within which the molecule is allowed to walk) into $N-1=9$ equal intervals. At time $t_1$ the particle is placed at a certain initial position characterised by $\{a_1\}$. The particle can then jump to the left with probability $p$ and to the right with probability $q=1-p$. We assume that once the particle reaches position $0$ or $a$ it is absorbed by the boundary and cannot leave that position anymore. The matrix $P_{1|1}$ is then given by
\begin{align}
P_{1|1}= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
p & 0 & q & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & p & 0 & q & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & p & 0 & q & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & p & 0 & q & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & p & 0 & q & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & p & 0 & q & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & p & 0 & q & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & p & 0 & q \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{pmatrix}
\end{align}
Say the particle sits initially at position $i=5$; then the probability distribution is $P_1(\{ a_1\}t_1)=(0,0,0,0,1,0,0,0,0,0)$, so the particle sits at $i=5$ with probability $1$. Since the rows of $P_{1|1}$ sum to one, the distribution is written as a row vector that multiplies the matrix from the left. Given this information we can calculate $P_2(\{a_2 \}t_2;\{ a_1\}t_1)$.
\begin{align}
\begin{pmatrix} 0 & 0 & 0 & p & 0 & q & 0 & 0 & 0 & 0 \end{pmatrix}= \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
p & 0 & q & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & p & 0 & q & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & p & 0 & q & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & p & 0 & q & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & p & 0 & q & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & p & 0 & q & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & p & 0 & q & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & p & 0 & q \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{pmatrix}
\end{align}
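This one-step multiplication can be checked numerically. The sketch below is a minimal illustration, assuming NumPy and, purely for concreteness, the symmetric choice $p=q=1/2$; it builds the absorbing-boundary matrix above and propagates the distribution one step:

```python
import numpy as np

N = 10
p, q = 0.5, 0.5          # left/right jump probabilities (symmetric choice assumed here)

# Row-stochastic transition matrix with absorbing boundaries at i = 1 and i = N
P = np.zeros((N, N))
P[0, 0] = 1.0            # left boundary absorbs
P[N - 1, N - 1] = 1.0    # right boundary absorbs
for i in range(1, N - 1):
    P[i, i - 1] = p      # jump one position to the left
    P[i, i + 1] = q      # jump one position to the right

# Each row of the stochastic matrix sums to one
assert np.allclose(P.sum(axis=1), 1.0)

P1 = np.zeros(N)
P1[4] = 1.0              # particle starts at position i = 5 (index 4)

# One step: the distribution, written as a row vector, multiplies P from the left
P2 = P1 @ P
print(P2)                # (0, 0, 0, p, 0, q, 0, 0, 0, 0)
```

The same pattern, `dist @ P`, applied repeatedly yields the distribution at any later step.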
At the next time step $t_3$ the distribution $P_3(\{a_3 \}t_3;\{a_2 \}t_2;\{ a_1\}t_1)$ follows by applying $P_{1|1}$ once more
\begin{align}
\begin{pmatrix} 0 & 0 & p^2 & 0 & 2pq & 0 & q^2 & 0 & 0 & 0 \end{pmatrix}= \begin{pmatrix} 0 & 0 & 0 & p & 0 & q & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
p & 0 & q & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & p & 0 & q & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & p & 0 & q & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & p & 0 & q & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & p & 0 & q & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & p & 0 & q & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & p & 0 & q & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & p & 0 & q \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
\end{pmatrix} \tag{1}
\end{align}
Notice that, by the binomial theorem, the elements of $P_n$ always sum up to $1$; here $p^2+2pq+q^2=(p+q)^2=1$. In a more abstract notation we can write (1) as
\begin{align}
P_3(\{a_3 \}t_3;\{a_2 \}t_2;\{ a_1\}t_1)&= P_{1|1}(\{a_3 \}t_3 | \{ a_2\}t_2 ) \cdot P_{1|1}(\{a_2 \}t_2 | \{ a_1\}t_1 ) \cdot P_1(\{ a_1\}t_1) \\
&= P_{2|1}(\{a_3 \}t_3|\{ a_2\}t_2;\{ a_1\}t_1 ) \cdot P_2(\{a_2 \}t_2;\{ a_1\}t_1)
\end{align}
Comparing the two lines, we recognise the following property of Markov processes:

\begin{align} P_{2|1}(\{a_3 \}t_3 | \{ a_2\}t_2;\{ a_1\}t_1 )=P_{1|1}(\{a_3 \}t_3 | \{ a_2\}t_2 ) \end{align}

Proceeding with the same algorithm to get transition probabilities for the following times $t_4,t_5, \dots, t_n$, we see that the general statement

\begin{align} P_{1|n-1}(\{a_n \}t_n | \{ a_{n-1}\}t_{n-1}; \dots ;\{a_2 \}t_2;\{ a_1\}t_1)=P_{1|1}(\{a_n \}t_n | \{ a_{n-1}\}t_{n-1}) \tag{2} \end{align}

is true. This equation restates the fact that a single $P_{1|1}(\{a_n \}t_n | \{ a_{n-1}\}t_{n-1})$ contains all the necessary information about the Markov process (it is independent of the earlier states $\{ a_1\}t_1;\{a_2 \}t_2; \dots$) and that, given an initial PDF, the probability distribution at all future times can be calculated.
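The claim that $P_{1|1}$ together with an initial PDF determines the distribution at all future times can be made concrete by iterating the one-step matrix. A minimal sketch (NumPy assumed, with the illustrative choice $p=q=1/2$); after many steps essentially all probability has accumulated at the two absorbing boundaries:

```python
import numpy as np

# One-step transition matrix of the absorbing random walk (p = q = 1/2 assumed)
N = 10
p = q = 0.5
P = np.zeros((N, N))
P[0, 0] = P[N - 1, N - 1] = 1.0      # absorbing boundaries
for i in range(1, N - 1):
    P[i, i - 1], P[i, i + 1] = p, q  # jump left / right

dist = np.zeros(N)
dist[4] = 1.0                        # initial PDF: particle at i = 5

for _ in range(500):                 # apply P_{1|1} five hundred times
    dist = dist @ P
    assert np.isclose(dist.sum(), 1.0)  # probability is conserved at every step

# nearly all probability now sits on the absorbing states i = 1 and i = 10
print(dist[0] + dist[-1])
```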

## Stationary Markov Processes

The example discussed above satisfies not only (2); the transition probability also does not depend on time explicitly. The probabilities $p,q$, and hence $P_{1|1}(\{a_2 \}t_2|\{ a_1\}t_1)$, depend only on the time difference $\Delta t =t_2-t_1$. Such processes are called *stationary Markov processes* (not to be confused with a stationary random process). We can thus write the transition probability as
\begin{align}
P_{1|1}(\{a_2 \}t_2|\{ a_1\}t_1)=P_{1|1}(\{a_2 \}|\{ a_1\},t_2-t_1)=P_{1|1}(\{a_2 \}|\{ a_1\},\Delta t)
\end{align}
and arrive at the **Chapman-Kolmogorov equation**
\begin{align}
P_{1|1}(\{a_2 \}|\{ a_1\}, t)=\sum_{\{ a\}} P_{1|1}(\{a_2 \}|\{a \},\Delta t)\, \cdot \,P_{1|1}(\{ a\}|\{a_1 \},t-\Delta t)
\end{align}
which gives the probability that, starting from the state $\{ a_1\}$, the state $\{a_2 \}$ is reached after time $t$, by summing over all intermediate states $\{ a\}$ that can be visited at time $t-\Delta t$.
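For the random-walk example the Chapman-Kolmogorov equation can be verified directly: in a stationary process the transition probability over $n$ steps is the $n$-th power of the one-step matrix, and the sum over intermediate states $\{a\}$ is exactly a matrix multiplication. A sketch under the same assumptions as before (NumPy, $p=q=1/2$):

```python
import numpy as np

# One-step matrix of the absorbing random walk (p = q = 1/2 assumed)
N = 10
p = q = 0.5
P = np.zeros((N, N))
P[0, 0] = P[N - 1, N - 1] = 1.0
for i in range(1, N - 1):
    P[i, i - 1], P[i, i + 1] = p, q

# Chapman-Kolmogorov: a transition over t = 7 steps equals a transition over
# Delta t = 3 steps composed with one over t - Delta t = 4 steps; the matrix
# product carries out the sum over the intermediate states {a}.
seven_steps = np.linalg.matrix_power(P, 7)
composed = np.linalg.matrix_power(P, 4) @ np.linalg.matrix_power(P, 3)
assert np.allclose(seven_steps, composed)
```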