The Repressilator

From bio-physics-wiki

The Repressilator is a synthetic genetic circuit capable of oscillations. This groundbreaking work by Michael Elowitz and Stanislas Leibler was published in the year 2000 in the renowned journal Nature. The Repressilator is experimental evidence that the oscillations observed in biochemical systems (e.g. Glycolytic Oscillations) can also arise at the level of gene regulation. The synthetic transcription network is built from three repressors ($\lambda$cI, LacI, TetR) on a recombinant plasmid. A separate plasmid reports the TetR level via a GFP reporter.

Repressilator plasmids: $\lambda$cI represses LacI, which represses TetR, which in turn represses $\lambda$cI.

The dynamical system that describes the time evolution of the mRNA ($m_i$) and protein ($p_i$) levels for $i=1,2,3$ is given by \begin{align} \frac{dm_i}{dt}&=-K_m m_i + \frac{\gamma}{1+K_bp_{i-1}^n}+ \gamma_0\\ \frac{dp_i}{dt}&=-K_p p_i + T m_i\\ \end{align} The constants $K_m$ and $K_p$ are degradation constants, $\gamma_0$ is the leakiness of the repressed promoter, $T$ is the translation rate and $\frac{\gamma}{1+K_bp_{i-1}^n}$ is the Hill function for repression, which can be derived with the methods of Statistical Mechanics. The Hill function is characterised by the maximal transcription rate $\gamma$ and the repression coefficient $K_b$. Notice that the nonlinearity of the Hill function makes this system of ordinary differential equations nonlinear. We can bring these equations into dimensionless form by measuring $p_i$ in units of $K_b^{-1/n}$, $m_i$ in units of $K_p/(TK_b^{1/n})$ and time in units of $K_m^{-1}$. In dimensionless form the system above becomes \begin{align} \frac{dm_i}{dt}&=- m_i + \frac{\alpha}{1+p_{i-1}^n}+ \alpha_0\\ \frac{dp_i}{dt}&=- \beta p_i + \beta m_i\\ \end{align} with parameters \begin{align} \alpha&=\frac{\gamma K_b^{1/n}T}{K_pK_m}\\ \alpha_0 &=\frac{\gamma_0 K_b^{1/n}T}{K_pK_m}\\ \beta&=\frac{K_p}{K_m} \end{align} Since the ODE system is nonlinear, no exact analytical solution can be found; we can, however, find an approximate solution of the linearized system, and solutions of the full nonlinear system can be obtained by numerical integration.
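The dimensionless system is easy to integrate numerically. The sketch below (Python with NumPy/SciPy; the parameter values $\alpha$, $\alpha_0$, $\beta$, $n$ are illustrative assumptions, not taken from the text) produces sustained oscillations for parameters in the unstable regime:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions, not from the text);
# they lie in the unstable (oscillating) regime.
ALPHA, ALPHA0, BETA, N = 216.0, 0.2, 5.0, 2.0

def repressilator(t, y):
    m, p = y[:3], y[3:]     # mRNA and protein levels for genes 1..3
    prev = np.roll(p, 1)    # p_{i-1}: each gene is repressed by the previous protein (cyclic)
    dm = -m + ALPHA / (1.0 + prev**N) + ALPHA0
    dp = -BETA * p + BETA * m
    return np.concatenate([dm, dp])

# Asymmetric initial condition so the oscillation can develop.
y0 = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
sol = solve_ivp(repressilator, (0.0, 60.0), y0, dense_output=True, rtol=1e-8)

# Sample p_1 at late times to see the oscillation on the attractor.
p1_late = sol.sol(np.linspace(30.0, 60.0, 500))[3]
```

Plotting `p1_late` (or any of the protein traces) against time shows the three proteins oscillating with a mutual phase shift of one third of a period.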
The fixed points of the system can be found by solving the simultaneous equations \begin{align} 0&=- m_i + \frac{\alpha}{1+p_{i-1}^n}+ \alpha_0\\ 0&=- \beta p_i + \beta m_i\\ \end{align} We immediately see that $p_i=m_i$ must hold, and the remaining equations can be written \begin{align} 0&=- m_1 + \frac{\alpha}{1+m_{3}^n}+ \alpha_0\\ 0&=- m_2 + \frac{\alpha}{1+m_{1}^n}+ \alpha_0\\ 0&=- m_3 + \frac{\alpha}{1+m_{2}^n}+ \alpha_0\\ \end{align} Solving this system leads to the equation $m_i=f(f(f(m_i)))$ with \begin{align} f(x)=\frac{\alpha}{1+x^n}+\alpha_0 \end{align} The roots of this equation give the fixed points. Finding roots of nonlinear simultaneous equations is generally nontrivial; they can, for example, be obtained with Mathematica's Solve function. Every fixed point of $f$ is clearly also a fixed point of $f(f(f(x)))$. Since $f(x)$ is monotonically decreasing, the composition $f(f(f(x)))$ is monotonically decreasing as well, so $x-f(f(f(x)))$ is strictly increasing and has exactly one zero; this unique zero must therefore coincide with the fixed point of $f$. Thus the fixed point is given by \begin{align} m=\frac{\alpha}{1+m^n}+\alpha_0 \end{align} Taylor series expansion of the ODE system $\dot{\mathbf{v}}=\mathbf{G}(\mathbf{v})$ around the fixed point $\mathbf{v}_0$ gives \begin{align} \mathbf{G}(\mathbf{v}) \approx \underbrace{\mathbf{G}(\mathbf{v}_0)}_{=0} + \underbrace{\frac{\partial G_i(\mathbf{v}_0)}{\partial v_j}}_{\mathbf{J}} \cdot \delta v_j + o(\| \delta v_j \|) \end{align} The Jacobian $\mathbf{J}$ of the linearized system is \begin{align} \mathbf{J}=\frac{\partial G_i(\mathbf{v}_0)}{\partial v_j}=\begin{bmatrix} -1 & 0& 0& 0& 0& X \\ 0 &-1 &0 &X &0 &0 \\ 0& 0& -1& 0 &X &0 \\ \beta & 0& 0 &-\beta &0 &0 \\ 0& \beta & 0 & 0 &-\beta &0 \\ 0& 0& \beta &0 &0 &-\beta \end{bmatrix} \end{align} where $X$ is the derivative of the dimensionless Hill function at the fixed point $m$ \begin{align} X=f'(m)=\frac{-\alpha n m^{n-1}}{(1+m^n)^2} \end{align} Now we can solve the linear system \begin{align} \frac{d \mathbf{v}}{dt}=\mathbf{J} \cdot \mathbf{v} \end{align} with \begin{align} \mathbf{v}=\begin{bmatrix} \delta m_1 \\ \delta m_2 \\ \delta m_3 \\ \delta p_1 \\ \delta p_2 \\ \delta p_3 
\end{bmatrix}=\begin{bmatrix} \delta \mathbf{m} \\ \delta \mathbf{p} \end{bmatrix} \end{align} The system has the solution \begin{align} \mathbf{v}(t)=e^{\mathbf{J} t} \mathbf{v}(0) \end{align}
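A minimal numerical sketch of this procedure, assuming illustrative parameter values: locate the fixed point $m=f(m)$ by root finding, assemble the Jacobian, and propagate a small perturbation with the matrix exponential.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.linalg import expm

# Illustrative parameters (assumptions, same notation as the text).
ALPHA, ALPHA0, BETA, N = 216.0, 0.2, 5.0, 2.0

def f(x):
    return ALPHA / (1.0 + x**N) + ALPHA0

# x - f(x) is strictly increasing, so the fixed point m = f(m) is unique.
m_star = brentq(lambda x: x - f(x), 0.0, ALPHA + ALPHA0)

# Derivative of the dimensionless Hill function at the fixed point.
X = -ALPHA * N * m_star**(N - 1) / (1.0 + m_star**N)**2

# Jacobian in the ordering (dm1, dm2, dm3, dp1, dp2, dp3).
I3 = np.eye(3)
P = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])   # cyclic permutation
J = np.block([[-I3, X * P], [BETA * I3, -BETA * I3]])

# Linearized evolution of a small perturbation: v(t) = exp(J t) v(0).
v0 = np.array([1e-3, 0, 0, 0, 0, 0])
v1 = expm(J * 1.0) @ v0
```

The block form `np.block([[-I3, X*P], [BETA*I3, -BETA*I3]])` reproduces exactly the $6\times 6$ Jacobian written out above.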

Stability Analysis

The Jacobian satisfies the eigenvalue equation \begin{align} \mathbf{J}\begin{bmatrix} \delta \mathbf{m} \\ \delta \mathbf{p} \end{bmatrix}= \lambda \begin{bmatrix} \delta \mathbf{m} \\ \delta \mathbf{p} \end{bmatrix} \end{align} In block-matrix form this equation becomes \begin{align} \begin{bmatrix} -\delta \mathbf{m} + \mathbf{X} \delta \mathbf{p}\\ \beta \delta \mathbf{m} -\beta \delta \mathbf{p} \end{bmatrix}= \lambda \begin{bmatrix} \delta \mathbf{m} \\ \delta \mathbf{p} \end{bmatrix} \end{align} (compare the LHS with the Jacobian). Now we rearrange to get a system of matrix equations \begin{align} \mathbf{X} \delta \mathbf{p} &= (\lambda +1) \delta \mathbf{m} \\ \delta \mathbf{m} &= \frac{(\lambda+ \beta)}{\beta} \delta \mathbf{p}\\ \end{align} or equivalently \begin{align} \mathbf{X} \delta \mathbf{p} &= \frac{(\lambda +1)(\lambda+ \beta)}{\beta} \delta \mathbf{p}\\ \delta \mathbf{m} &= \frac{(\lambda+ \beta)}{\beta} \delta \mathbf{p}\\ \end{align} Now we see that there is a relation between the eigenvalues of $\mathbf{X}$, which we call $\tau$, and the eigenvalues $\lambda$ of the whole system: \begin{align} \frac{(\lambda +1)(\lambda+ \beta)}{\beta} = \tau\\ \lambda^2+(1+\beta)\lambda+ \beta(1-\tau)=0 \end{align} Solving for $\lambda$ we have \begin{align} \lambda_{1,2}&=\frac{1}{2} \left( -(1+\beta)\pm \sqrt{(1+\beta)^2-4\beta(1-\tau)} \right)\\ \lambda_{1,2}&=\frac{1}{2} \left( -(1+\beta)\pm \sqrt{\beta^2+2\beta+1-4\beta(1-\tau)} \right)\\ \lambda_{1,2}&=\frac{1}{2} \left( -(1+\beta)\pm \sqrt{(1-\beta)^2+4\beta \tau} \right)\\ \tag{1} \end{align} The eigenvalues of a permutation matrix always lie on the complex unit circle, $|\lambda |=1$; more precisely, the eigenvalues of an $n$-dimensional permutation matrix satisfy $\lambda^{k}=1$ for some integer $k$.
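This relation can be checked numerically: the six eigenvalues of $\mathbf{J}$ computed directly agree with the roots $\lambda$ of the quadratic for each eigenvalue $\tau$ of $\mathbf{X}$ (the values of $\beta$ and $X$ below are illustrative assumptions).

```python
import numpy as np

# Illustrative values for beta and the Hill-function slope X (assumptions).
BETA, Xval = 5.0, -1.9
P = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
Xmat = Xval * P
I3 = np.eye(3)
J = np.block([[-I3, Xmat], [BETA * I3, -BETA * I3]])

# Eigenvalues predicted by lambda^2 + (1+beta) lambda + beta (1 - tau) = 0,
# one pair for each eigenvalue tau of the 3x3 block X.
lams_pred = []
for tau in np.linalg.eigvals(Xmat):
    disc = np.sqrt((1 - BETA)**2 + 4 * BETA * tau + 0j)
    lams_pred += [(-(1 + BETA) + disc) / 2, (-(1 + BETA) - disc) / 2]

# Direct eigenvalues of the full 6x6 Jacobian.
lams_direct = np.linalg.eigvals(J)

# Every predicted eigenvalue appears in the direct spectrum.
match = all(np.min(np.abs(lams_direct - lam)) < 1e-8 for lam in lams_pred)
```
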
Proof: A cyclic permutation matrix of order $k$ satisfies $\mathbf{P}^{k}=\mathbf{1}$ and hence $\mathbf{P}^{k+1}=\mathbf{P}$, thus the eigenvalue equation becomes \begin{align} \mathbf{P} \mathbf{v}= \lambda \mathbf{v}\\ \mathbf{P}^{k+1} \mathbf{v}= \lambda^{k+1} \mathbf{v}=\lambda \mathbf{v}\\ \lambda^{k}=1 \end{align} The repressilator with \begin{align} \mathbf{X}=\begin{bmatrix} 0& 0& X \\X &0 &0 \\ 0 &X &0 \\ \end{bmatrix}=X \cdot \begin{bmatrix} 0& 0& 1 \\1 &0 &0 \\ 0 &1 &0 \\ \end{bmatrix} \end{align} contains, after factoring out $X$, the permutation matrix \begin{align} \begin{bmatrix} 0& 0& 1 \\1 &0 &0 \\ 0 &1 &0 \\ \end{bmatrix} \end{align} with $k=3$. Its eigenvalues $\mu$ therefore lie on the complex unit circle and satisfy $\mu^3=1$, with the solutions $\mu_m=e^{i2\pi \frac{m}{3}}$ for $m=1,2,3$ in the complex plane; the eigenvalues of $\mathbf{X}$ are then $\tau=X\mu$.
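A quick numerical check that the eigenvalues of the 3-cycle permutation matrix are exactly the cube roots of unity:

```python
import numpy as np

# The k = 3 cyclic permutation matrix from the repressilator.
P = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
mu = np.linalg.eigvals(P)
# mu contains 1, exp(i 2 pi / 3) and exp(-i 2 pi / 3).
```
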

Figure: the three cube roots of unity on the complex unit circle.

\begin{align} \tau_{1,2,3}=X,\;Xe^{i2\pi/3},\;Xe^{i4\pi /3}=X,\;X \left(-1/2+\sqrt{3}/2\,i\right),\; X \left(-1/2-\sqrt{3}/2\,i\right) \end{align} Inserting the solutions for $\tau$ into (1) gives the six eigenvalues of the linearized repressilator system \begin{align} \lambda_{1,2}&=\frac{1}{2} \left( -(1+\beta)\pm \sqrt{(1-\beta)^2+4\beta \tau} \right)\\ \end{align} A steady state is stable if the real parts of all eigenvalues $\lambda$ are negative. For the real eigenvalue $\tau_1 =X$ the square root is real; at $X=1$ we would have \begin{align} \lambda_{1,2}&=\frac{1}{2} \left( -(1+\beta)\pm \sqrt{(1-\beta)^2+4\beta} \right) =\frac{1}{2}\left( -(1+\beta)\pm (1+\beta) \right)\\ \end{align} so $\lambda_1=0$. Thus for $X<1$ both eigenvalues of this branch are negative. This is secured, since $X$ is by definition negative. $\tau_2$ and $\tau_3$ are complex conjugates of each other, so they both lead to the same condition for stability \begin{align} Re\left( -(1+\beta)\pm \sqrt{(1-\beta)^2 -2\beta X +2\sqrt{3}\beta X\,i} \right)<0\\ \end{align} We write the term under the square root as a complex exponential \begin{align} \underbrace{(1-\beta)^2-2 \beta X}_{\rho \, cos(\theta)} +\underbrace{2\sqrt{3}\beta X}_{\rho \, sin(\theta)}\,i= \rho \, e^{i\theta} \\ \end{align} This way we can deal with the square root \begin{align} Re\left( -(1+\beta)\pm \sqrt{\rho \, e^{i\theta}} \right)=Re\left( -(1+\beta)\pm \sqrt{\rho} \, e^{i\theta/2} \right)<0\\ \end{align} and get for the real part \begin{align} \pm \sqrt{\rho} \, cos(\theta/2) < (1+\beta)\\ \end{align} Since $X<0$, the real part $\rho\, cos(\theta)=(1-\beta)^2-2\beta X$ is positive, so $\theta \in (-\pi/2,\pi/2)$ and $cos(\theta/2)>0$. For the negative sign the inequality is therefore always fulfilled.
The interesting case is the one with the positive sign \begin{align} \sqrt{\rho} \, cos(\theta/2) < (1+\beta)\\ \tag{2} \end{align} We can use the half-angle formula \begin{align} cos^2(\theta /2)=\frac{1+cos(\theta)}{2} \end{align} Squaring (2), which is allowed since both sides are positive, we can plug in the half-angle formula \begin{align} \rho \, cos^2(\theta/2)=\frac{\rho + \rho \, cos(\theta)}{2} < (1+\beta)^2\\ \end{align} Squaring once more and using $\rho^2=(\rho\,cos\theta)^2+(\rho\,sin\theta)^2$ gives \begin{align} (\rho\,sin\theta)^2 < 4(1+\beta)^2\left((1+\beta)^2-\rho\,cos\theta\right) \end{align} With $\rho\,sin\theta=2\sqrt{3}\beta X$ and $(1+\beta)^2-\rho\,cos\theta=(1+\beta)^2-(1-\beta)^2+2\beta X=2\beta(2+X)$ this reads $12\beta^2X^2<8\beta(1+\beta)^2(2+X)$, and we find the condition \begin{align} \frac{3\beta X^2}{4+2X} < (1+\beta)^2 \end{align} The graph of $\frac{3 X^2}{4+2X} = \frac{(1+\beta)^2}{\beta}$ separates the stable and unstable parameter regions.
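The condition can be tested against a direct eigenvalue computation. In the sketch below (illustrative parameter values; strong repression, large $\alpha$, versus weak repression, small $\alpha$) the inequality and the sign of the largest real part of the eigenvalues of $\mathbf{J}$ classify both parameter sets consistently.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative fixed parameters (assumptions): leakiness and Hill coefficient.
ALPHA0, N = 0.2, 2.0

def hill_slope(alpha):
    """Slope X = f'(m) of the Hill function at the fixed point m = f(m)."""
    f = lambda x: alpha / (1.0 + x**N) + ALPHA0
    m = brentq(lambda x: x - f(x), 0.0, alpha + ALPHA0)
    return -alpha * N * m**(N - 1) / (1.0 + m**N)**2

def predicted_stable(X, beta):
    # The condition derived above: stable iff 3 beta X^2 / (4 + 2X) < (1+beta)^2.
    return 3 * beta * X**2 / (4 + 2 * X) < (1 + beta)**2

def max_real_part(X, beta):
    # Largest real part of the eigenvalues of the full 6x6 Jacobian.
    P = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
    I3 = np.eye(3)
    J = np.block([[-I3, X * P], [beta * I3, -beta * I3]])
    return np.linalg.eigvals(J).real.max()

X_osc, X_stab = hill_slope(216.0), hill_slope(2.0)   # strong vs weak repression
```
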


For parameters that lie exactly on this boundary curve the eigenvalues are purely imaginary and the system performs sustained oscillations with constant amplitude.

Further Reading:

  • Michael B. Elowitz & Stanislas Leibler - A synthetic oscillatory network of transcriptional regulators. Nature 2000 [1]
  • Rob Phillips - Physical Biology of the Cell [2]