Negative Autoregulation


Negative autoregulation (NAR) occurs when the product of a gene represses its own production. NAR is a common network motif in transcription networks.

[Image: Negativautoregulation.png — the negative autoregulation motif]

We use separation of time scales, as in simple gene regulation, and neglect the delay caused by activation of the transcription factor, since it is much shorter than the time needed for transcription. The production rate of protein $Y$ then depends only on the amount of $Y$ present: the higher the concentration of $Y$, the lower the production rate. Thus the production rate $\beta(Y)$ is a function of $Y$, and a good approximation for it is the decreasing Hill function
\begin{align} \beta(Y)=\frac{\beta_{max}}{1+(Y/K)^n} \end{align}
This leads to a nonlinear ODE that describes the change in concentration of protein $Y$
\begin{align} \frac{dY}{dt}=\frac{\beta_{max}}{1+(Y/K)^n}-\alpha Y \end{align}
If we plot the production and degradation terms in the same diagram, we can recognise important features.

$dY/dt$ with Hill function parameters $\beta=1$, $K=0.5$, $n=4$ and $\alpha=1$
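
As a quick numerical illustration, the following Python sketch integrates the NAR equation with the caption's parameters ($\beta_{max}=1$, $K=0.5$, $n=4$, $\alpha=1$) using a simple Euler scheme (a minimal sketch, the scheme and end time are chosen only for illustration); the trajectory settles at the fixed point where production balances degradation, here $Y_{st}=K=0.5$.

    # Minimal Euler integration of dY/dt = beta_max/(1 + (Y/K)**n) - alpha*Y
    # (parameters taken from the caption above; purely illustrative).
    beta_max, K, n, alpha = 1.0, 0.5, 4, 1.0

    def dYdt(Y):
        return beta_max / (1.0 + (Y / K) ** n) - alpha * Y

    Y, dt = 0.0, 0.001
    for _ in range(int(10.0 / dt)):          # integrate up to t = 10
        Y += dt * dYdt(Y)

    print(f"Y(10)       ~ {Y:.4f}")          # approaches the fixed point Y_st = 0.5
    print(f"dY/dt there ~ {dYdt(Y):.2e}")    # ~0: production balances degradation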

On the left side of the fixed point, $\beta(Y)$ is always larger than $\alpha Y$, thus $dY/dt>0$ and the concentration of $Y$ increases. On the right side of the fixed point, $\beta(Y)$ is always smaller than $\alpha Y$, so $dY/dt < 0$ and the concentration decreases. A comparison of simple regulation and negative autoregulation shows that the distance between $\beta(Y)$ and $\alpha Y$ (orange arrows, see picture below) is always larger in NAR than in simple regulation.

[Image: NARsimplecompare.png — comparison of production and degradation curves for NAR and simple regulation]

This means that the rate of change $dY/dt$ in NAR is always larger, and thus the fixed point at $Y_{st}$ is reached faster. The function of NAR is therefore to speed up the response time. Evolution built in NAR wherever it is an advantage to produce proteins quickly.

Negative autoregulation speeds up the response time.

Now consider the idealised case in which we make the logic approximation: $\beta(Y)$ is a step function $\theta$, so that $Y$ is either produced at full rate or not at all, depending on the value of $Y$. For $Y<K$, $\theta(Y<K)$ is one, whereas for $Y>K$ it is zero
\[\theta(Y<K) = \begin{cases} 1 & \mbox{if } Y<K \\ 0 & \mbox{if } Y>K \\ \end{cases} \]
Then the production rate is
\begin{align} \beta(Y)=\beta_{max}\, \theta(Y<K) \end{align}
As shown in the following diagram, the production of protein $Y$ stops when $Y=K=Y_{st}$.

[Image: NARstep.png — NAR with step-function production]

In NAR the steady state level $Y_{st}$ approximately equals the repression coefficient $K$.

Using the logic approximation we see that the steady state $Y_{st}$ can be decoupled from the production rate $\beta_{max}$: the response time can be fine-tuned by altering $\beta_{max}$ through the promoter sequence, and independently $Y_{st}$ can be fine-tuned by adjusting $K$ through alterations in the operator site. Keep in mind that the step function is an idealisation, thus $K=Y_{st}$ is only an approximation.
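
This decoupling can be illustrated with a minimal Python sketch of the logic approximation (the parameter values are chosen only for illustration): doubling $\beta_{max}$ speeds up the rise but leaves the steady state pinned near $K$, whereas changing $K$ moves the steady state itself.

    # Logic approximation: production is beta_max while Y < K and zero otherwise.
    # Returns the (approximate) steady state and the time at which Y first reaches K/2.
    def simulate(beta_max, K, alpha=1.0, T=10.0, dt=0.001):
        Y, t_half = 0.0, None
        for i in range(int(T / dt)):
            production = beta_max if Y < K else 0.0
            Y += dt * (production - alpha * Y)
            if t_half is None and Y >= K / 2.0:
                t_half = (i + 1) * dt
        return Y, t_half

    # Doubling beta_max speeds up the rise but the steady state stays near K ...
    print(simulate(beta_max=1.0, K=0.5))   # -> (~0.5, ~0.29)
    print(simulate(beta_max=2.0, K=0.5))   # -> (~0.5, ~0.13)
    # ... whereas changing K moves the steady state itself.
    print(simulate(beta_max=1.0, K=1.0))   # -> (~1.0, ~0.69)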

The red curve represents the dynamics of NAR with parameters $\beta =5$, $n=100$ (approximately a step function), $K=1$; the blue curve shows NAR with $\beta =5$, $n=4$, $K=0.725$; the yellow curve describes simple regulation with $\beta=1$; all curves use the degradation rate $\alpha=1$.
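
A rough numerical sketch of this comparison (simple Euler integration with the caption's parameters; step size and end time are only illustrative) gives the half-rise times directly:

    # Time to reach half of the steady state for the three parameter sets above.
    def half_rise_time(production, alpha=1.0, T=20.0, dt=0.0005):
        # integrate dY/dt = production(Y) - alpha*Y and record when Y crosses Y_st/2
        Y, trace = 0.0, []
        for i in range(int(T / dt)):
            Y += dt * (production(Y) - alpha * Y)
            trace.append(Y)
        Y_st = trace[-1]                     # long-time value ~ steady state
        t_half = next(i * dt for i, y in enumerate(trace) if y >= Y_st / 2.0)
        return Y_st, t_half

    hill = lambda beta, K, n: (lambda Y: beta / (1.0 + (Y / K) ** n))
    for label, prod in [("NAR, n=100 (red)   ", hill(5.0, 1.0, 100)),
                        ("NAR, n=4   (blue)  ", hill(5.0, 0.725, 4)),
                        ("simple reg (yellow)", lambda Y: 1.0)]:
        Y_st, t_half = half_rise_time(prod)
        print(f"{label}: Y_st ~ {Y_st:.3f}, T_1/2 ~ {t_half:.3f}")
    # Both NAR curves reach half of their steady state roughly six times
    # faster than simple regulation with the same degradation rate.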

A Taylor series expansion of $Y(t)$ around $t=0$ yields the early time course $Y(t)=\beta_{max} \cdot t$. We use this to calculate the response time
\begin{align} \beta_{max} T_{1/2}=Y_{st}/2=K/2\\ T_{1/2}=\frac{K}{2\beta_{max}} \end{align}
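
For example, with the near-step parameters from the figure above ($\beta_{max}=5$, $K=1$, $\alpha=1$) this gives $T_{1/2}=\frac{1}{2 \cdot 5}=0.1$, whereas simple regulation with the same degradation rate responds on the time scale $T_{1/2}=\ln 2/\alpha \approx 0.69$, roughly seven times slower.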

Circuits with negative autoregulation not only speed up the response time, they also show increased robustness against parameter fluctuations. This is a result of high Hill coefficients: as the steady-state formula below shows, $Y_{st}$ depends on $\beta_{max}$ and $\alpha$ only through the exponent $1/(n+1)$, so for large $n$ it is nearly insensitive to fluctuations in these parameters.


For $Y/K \gg 1$ we can neglect the 1 in the denominator of the decreasing Hill function. We are then left with a still nonlinear differential equation for the change in concentration of protein $Y$
\begin{align} \frac{dY}{dt}=\frac{\beta_{max}}{(Y/K)^n}-\alpha Y \end{align}
with the steady state
\begin{align} \alpha Y_{st}=\frac{\beta_{max}K^n}{Y_{st}^n}\\ \alpha Y_{st}^{n+1}=\beta_{max}K^n\\ Y_{st}= \left[ \frac{\beta_{max}}{\alpha}K^n \right]^{1/(n+1)} \end{align}
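
Evaluating this steady-state formula in a small Python sketch (the parameter values are only illustrative) shows how a high Hill coefficient buffers $Y_{st}$ against changes in $\beta_{max}$:

    # Y_st = (beta_max * K**n / alpha)**(1/(n+1)) in the strong-repression limit.
    # A two-fold change in beta_max barely moves Y_st when n is large.
    def y_st(beta_max, K, alpha, n):
        return (beta_max * K ** n / alpha) ** (1.0 / (n + 1))

    K, alpha = 1.0, 1.0
    for n in (1, 4, 100):
        low, high = y_st(1.0, K, alpha, n), y_st(2.0, K, alpha, n)
        print(f"n={n:3d}: Y_st(beta=1)={low:.3f}, Y_st(beta=2)={high:.3f}, ratio={high/low:.3f}")
    # n=1: ratio ~1.41; n=4: ~1.15; n=100: ~1.007 -> high n buffers fluctuations in beta_max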

This is exactly a Bernoulli differential equation, which is perhaps easier to see in the form
\begin{align} Y'+\alpha Y=(\beta_{max} K^n) \cdot Y^{-n} \end{align}
The equation can be solved analytically through the substitution
\[w=Y^{n+1}\]
\[w'=(1+n)Y^{n}Y'\]
This gives a linear equation, which is easier to see after rewriting the equation as
\begin{align} Y'Y^n+\alpha Y^{n+1}=\beta_{max} K^n \end{align}
\[\frac{w'}{1+n} + \alpha w = \beta_{max} K^n\]
This can be solved via the integrating factor $u=e^{\int (1+n) \alpha \, dt}$, which gives
\[u w' + u (1+n) \alpha w= u \beta_{max} K^n (1+n)\]
\[(wu)'= u \beta_{max} K^n (1+n)\]
\[wu= \int u \beta_{max} K^n (1+n)\, dt + C\]
\[w= e^{-\int (1+n) \alpha \, dt} \left( \int e^{\int (1+n) \alpha \, dt} \beta_{max} K^n (1+n)\, dt +C \right)\]
Transforming back gives
\[Y= \left( e^{- (1+n)\alpha t} \left[(1+n) \int e^{ (1+n)\alpha t} \beta_{max} K^n \, dt +C \right]\right)^{1/(n+1)}\]
\[Y= \left( e^{-(1+n)\alpha t} \left[\frac{\beta_{max} K^n}{\alpha} e^{(1+n)\alpha t} +C \right]\right)^{1/(n+1)}\]
With the initial condition $Y(t=0)=0$ we get
\[Y(0)= \left( \frac{\beta_{max}}{\alpha } K^n +C \right)^{1/(n+1)}=0\]
\[C=-\frac{\beta_{max}}{\alpha } K^n=-Y_{st}^{n+1}\]
So we finally obtain the solution
\[Y(t)=Y_{st} \left( 1 -e^{-(1+n)\alpha t}\right)^{1/(n+1)}\]
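
As a consistency check, one can compare a numerical derivative of this closed-form solution with the right-hand side of the approximated ODE; the two should coincide. The Python sketch below does this for illustrative parameter values ($\beta_{max}=5$, $K=1$, $\alpha=1$, $n=4$, chosen only as an example).

    import math

    beta_max, K, alpha, n = 5.0, 1.0, 1.0, 4     # illustrative parameters
    Y_st = (beta_max * K ** n / alpha) ** (1.0 / (n + 1))

    def Y(t):
        # closed-form solution derived above
        return Y_st * (1.0 - math.exp(-(n + 1) * alpha * t)) ** (1.0 / (n + 1))

    def rhs(y):
        # right-hand side of the approximated ODE: dY/dt = beta_max*K**n/Y**n - alpha*Y
        return beta_max * K ** n / y ** n - alpha * y

    # central-difference derivative of Y(t) versus the ODE's right-hand side
    for t in (0.2, 0.5, 1.0, 2.0):
        h = 1e-6
        dY_numeric = (Y(t + h) - Y(t - h)) / (2 * h)
        print(f"t={t:.1f}: dY/dt (numeric) = {dY_numeric:.5f}, rhs(Y(t)) = {rhs(Y(t)):.5f}")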








Further reading:

  • Uri Alon - An Introduction to Systems Biology