Imagine you are taking a random walk: you go up with probability \(p\) and down with probability \(1-p\), except that at the origin you must go up.
You might ask the following question: in expectation, how long does it take to get distance \(n\) away from the origin?
It turns out there are three regimes.
If \(p=\frac{1}{2}+\varepsilon\) for constant \(\varepsilon>0\), then you reach \(n\) after about \(\frac{n}{2\varepsilon}\) steps in expectation: each step has expected drift \(p-(1-p)=2\varepsilon\), so it takes about \(\frac{n}{2\varepsilon}\) steps to drift distance \(n\).
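As a sanity check on this regime, here is a quick simulation sketch (my own; the parameters \(p=0.6\), \(n=200\) are arbitrary):

```python
import random

def hit_time(n, p, rng):
    """Steps for the reflected walk (forced up at 0) to first reach height n."""
    x, t = 0, 0
    while x < n:
        x += 1 if (x == 0 or rng.random() < p) else -1
        t += 1
    return t

rng = random.Random(0)
n, p = 200, 0.6          # eps = 0.1, so we predict about n/(2*eps) = 1000 steps
trials = 200
avg = sum(hit_time(n, p, rng) for _ in range(trials)) / trials
print(avg)               # should be close to 1000
```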
If \(p=\frac{1}{2}\), then you reach \(n\) after about \(n^{2}\) steps. You can see this by a Chernoff bound, or by noting that \(X_{t}^{2}-t\) is a martingale (even with the forced up-step at the origin) and applying optional stopping at the first time \(X_t = n\).
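In fact, writing \(F[i]\) for the expected time to reach \(n\) from height \(i\), the recurrence \(F[i]=1+\frac{1}{2}F[i-1]+\frac{1}{2}F[i+1]\) (with \(F[0]=1+F[1]\) and \(F[n]=0\)) is solved exactly by \(F[i]=n^{2}-i^{2}\), so \(F[0]=n^{2}\) on the nose. A quick numerical check (my own sketch):

```python
import numpy as np

def expected_time_symmetric(n):
    # Linear system for the hitting-time recurrence at p = 1/2.
    A = np.zeros((n + 1, n + 1))
    b = np.ones(n + 1)
    A[0, 0], A[0, 1] = 1.0, -1.0              # F[0] - F[1] = 1 (forced up at 0)
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = -0.5, 1.0, -0.5
    A[n, n], b[n] = 1.0, 0.0                  # F[n] = 0
    return np.linalg.solve(A, b)[0]

print(expected_time_symmetric(50))  # 2500.0, i.e. exactly 50^2
```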
If \(p=\frac{1}{2}-\varepsilon\), then here is an argument for why it will take about \(\left(\frac{1+2\varepsilon}{1-2\varepsilon}\right)^n\) steps:
Consider walking from some height \(h\) either up to height \(2h\) or back down to \(0\). Any path going up can be reflected to get a path going down. But the reflection swaps up-steps for down-steps, so the downward path is more likely by a factor of \(\left(\frac{1-p}{p}\right)^{h}\), which is exponentially large in \(h\).
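This heuristic matches the exact gambler's-ruin formula: starting at \(h\) with up-probability \(p<\frac{1}{2}\), the chance of reaching \(2h\) before \(0\) is \(\frac{r^{h}-1}{r^{2h}-1}\) where \(r=\frac{1-p}{p}>1\), which is within a constant factor of \(r^{-h}\). A quick check (my own sketch; \(p=0.45\) is arbitrary):

```python
p = 0.45
r = (1 - p) / p                              # > 1 since p < 1/2
for h in [5, 10, 20]:
    # Gambler's ruin: P(hit 2h before 0, starting from h)
    exact = (r**h - 1) / (r**(2 * h) - 1)
    print(h, exact, r**-h)                   # exact sits just below r^{-h}
```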
Let \[\lambda = \frac{1+2\varepsilon}{1-2\varepsilon}>1,\] so getting from \(h\) to \(2h\) before falling back to \(0\) takes about \(\lambda^{h}\) attempts. To get all the way to \(n\) you repeatedly double your height, paying \[\lambda^{1}\cdot\lambda^{2}\cdot\lambda^{4}\cdots \lambda^{\frac{n}{2}} =\lambda^{1+2+4+\cdots+\frac{n}{2}}=\lambda^{n-1}\approx\lambda^{n}.\]
Wow, that's a lot.
Don’t believe it? See for yourself!
```python
import matplotlib.pyplot as plt
import numpy as np

p = 0.45  # down-biased walk: eps = 1/2 - p = 0.05

def val(n):
    # F[i] = expected steps to reach n from height i.
    # F[n] = 0, F[0] = 1 + F[1] (forced up at the origin),
    # F[i] = 1 + (1-p)*F[i-1] + p*F[i+1] for 0 < i < n.
    A = np.zeros((n + 1, n + 1))
    b = np.ones(n + 1)
    A[0][0] = 1
    A[0][1] = -1
    for i in range(1, n):
        A[i][i] = 1
        A[i][i - 1] = -(1 - p)
        A[i][i + 1] = -p
    A[n][n] = 1
    b[n] = 0
    F = np.linalg.solve(A, b)
    return F[0]

NS = range(2, 100)
vals = [val(n) for n in NS]
eps = 1 / 2 - p
lam = (1 + 2 * eps) / (1 - 2 * eps)
points = [lam ** (1.1 * n) for n in NS]  # lambda^n, padded slightly for comparison
plt.plot(NS, vals)
plt.plot(NS, points)
plt.show()
```