Imagine you are taking a random walk: you go up with probability \(p\) and down with probability \(1-p\), except that if you are at the origin you must go up.

You might ask the following question: in expectation, how long does it take to get distance \(n\) away from the origin?
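
If you want to play with the walk yourself, here is a minimal simulation sketch (the function name `hitting_time` and the parameter choices are mine, just for illustration):

```python
import random

def hitting_time(p, n):
    """Run the walk from 0 until it first hits height n; return the step count."""
    x, t = 0, 0
    while x < n:
        if x == 0 or random.random() < p:
            x += 1  # forced up at the origin, otherwise up with probability p
        else:
            x -= 1
        t += 1
    return t

# For example, averaging a few trials at p = 1/2:
trials = [hitting_time(0.5, 50) for _ in range(200)]
print(sum(trials) / len(trials))  # should land near 50**2 = 2500
```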

It turns out there are three regimes.

  1. If \(p=\frac{1}{2}+\varepsilon\) for constant \(\varepsilon>0\), then you will reach \(n\) after about \(\frac{n}{2\varepsilon}\) steps in expectation, since the walk drifts upward by \(2\varepsilon\) per step.

  2. If \(p=\frac{1}{2}\), then you reach \(n\) after about \(n^{2}\) steps. You can see this by a Chernoff bound, or by noting that \(X_{t}^{2}-t\) is a martingale, where \(X_{t}\) is your position at time \(t\) (this survives the forced step up at the origin: from \(0\) you move to \(1\), and \(1^{2}=0^{2}+1\)). Optional stopping at the first time \(T\) with \(X_{T}=n\) gives \(\mathbb{E}[T]=n^{2}\).

  3. If \(p=\frac{1}{2}-\varepsilon\), then here is an argument for why it will take about \(\left(\frac{1+2\varepsilon}{1-2\varepsilon}\right)^{n}\) steps:

Consider walking from some height \(h\) until you either reach height \(2h\) or fall back to \(0\). Any path that climbs by \(h\) can be reflected into a path that drops by \(h\), and the reflected path has much higher weight, by a factor exponential in \(h\).
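
To pin down that factor: a path with \(u\) up-steps and \(d\) down-steps has probability \(p^{u}(1-p)^{d}\), and flipping every step gives a path with probability \(p^{d}(1-p)^{u}\). If the original path climbs from \(h\) to \(2h\), then \(u-d=h\), so with \(p=\frac{1}{2}-\varepsilon\) the reflected path is more likely by \[\left(\frac{1-p}{p}\right)^{h}=\left(\frac{1+2\varepsilon}{1-2\varepsilon}\right)^{h}.\] So doubling your height from \(h\) should take about that many tries.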

Let \[\lambda = \frac{1+2\varepsilon}{1-2\varepsilon}>1.\] Now to get all the way to \(n\) you must double your height \(\log_{2}n\) times, which costs about \[\lambda^{1}\lambda^{2}\lambda^{4}\cdots \lambda^{n/2} =\lambda^{1+2+4+\cdots+n/2}=\lambda^{n-1}\approx\lambda^{n}.\]

Wow, that's a lot.

Don’t believe it? See for yourself!
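
The script below computes the expected hitting time exactly (no simulation), by solving the linear system \(F_{i}=1+(1-p)F_{i-1}+pF_{i+1}\) for \(0<i<n\), with \(F_{0}=1+F_{1}\) and \(F_{n}=0\), where \(F_{i}\) is the expected number of steps to reach \(n\) from height \(i\):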

```python
import matplotlib.pyplot as plt
import numpy as np

p = 0.45  # probability of stepping up

def val(n):
    # F[i] = expected number of steps to reach n starting from height i.
    # Row 0 encodes F[0] - F[1] = 1 (forced up at the origin);
    # rows 1..n-1 encode F[i] - (1-p) F[i-1] - p F[i+1] = 1;
    # row n encodes F[n] = 0.
    A = np.zeros((n + 1, n + 1))
    b = np.ones(n + 1)
    b[n] = 0

    A[0, 0] = 1
    A[0, 1] = -1

    for i in range(1, n):
        A[i, i] = 1
        A[i, i - 1] = -(1 - p)
        A[i, i + 1] = -p

    A[n, n] = 1

    F = np.linalg.solve(A, b)
    return F[0]

NS = range(2, 100)
vals = [val(n) for n in NS]
eps = 1 / 2 - p
lam = (1 + 2 * eps) / (1 - 2 * eps)
# Compare against lambda^(1.1 n); the 1.1 in the exponent is a little slack
# for lower-order terms, so this curve should stay above the exact values.
points = [lam ** (1.1 * n) for n in NS]
plt.plot(NS, vals, label="exact expected hitting time")
plt.plot(NS, points, label=r"$\lambda^{1.1n}$")
plt.yscale("log")
plt.legend()
plt.show()
```
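
On the log scale both curves should look roughly like straight lines, with the exact hitting times growing at rate about \(\lambda\) per unit of \(n\), just as the reflection argument predicts.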