In this short post I will comment on what I learned from reading William Kuszmaul’s “Train Tracks with Gaps” paper from FUN 2021.

general notes

  1. The paper is indeed pretty fun. Check it out yourself.
  2. There is a lot of clever union bounding that goes on.
  3. “Do each thing independently at random with some probability \(p\)” is generally a pretty good strategy.
  4. Lower bound proof: McDiarmid’s Inequality is a generalization of the Chernoff bound (stated below, after this list). If you have a random variable \(F\) that depends on a bunch of independent random variables \(A_1,A_2,\ldots, A_n\), but altering any one particular \(A_i\) has a bounded effect on \(F\) (a “Lipschitz condition”), then we can get an exponential concentration bound on \(F\).
  5. It is often useful to consider the “count” random variable even when you just care about whether or not something is \(0\), because you can analyze the expectation and variance of a count random variable (see the second-moment bound below).
  6. Now we discuss three algorithms for building the track. We know from a probabilistic argument that a valid track exists, and now we want an efficient algorithm to build it. Of course an inefficient trivial brute-force algorithm exists, but that is not so interesting in an exponentially-sized search space.
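
For reference, here is the form of McDiarmid’s Inequality being used (a standard statement, not quoted from the paper): if \(A_1,\ldots,A_n\) are independent and altering any single \(A_i\) changes \(F\) by at most \(c_i\), then \[\Pr\left[\left|F - \mathop{\mathrm{\mathbb{E}}}[F]\right| \ge t\right] \le 2\exp\left(\frac{-2t^2}{\sum_{i=1}^n c_i^2}\right).\]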
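
And to spell out the second-moment version of the “counts” trick: if \(X\) is a nonnegative count with \(\mathop{\mathrm{\mathbb{E}}}[X] > 0\), Chebyshev’s inequality gives \[\Pr[X = 0] \le \frac{\mathrm{Var}(X)}{\mathop{\mathrm{\mathbb{E}}}[X]^2},\] so bounding the expectation and variance of the count bounds the probability that the thing never happens at all.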

Zeroth Algorithm:

Actually the existence proof via the probabilistic method with alterations already gives a simple randomized algorithm: sample randomly, then patch the places that are broken. But we are going to talk about some other algorithms that achieve the same guarantees, for educational purposes. A sketch of the sample-then-alter pattern is below.
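
To make the pattern concrete, here is a minimal sketch of it on the textbook alterations example (finding a large independent set in a graph). This is my own toy illustration of the pattern, not the paper’s track construction.

```python
import random

def independent_set_by_alterations(n, edges, p):
    """Probabilistic method with alterations (toy example): sample each
    vertex independently with probability p, then fix each broken
    constraint (an edge with both endpoints sampled) by deleting one
    of its endpoints."""
    sampled = {v for v in range(n) if random.random() < p}
    for u, v in edges:
        if u in sampled and v in sampled:
            sampled.discard(v)  # the alteration: fix the broken edge
    return sampled  # guaranteed to be an independent set

# With m edges and p = n/(2m), the expected size is at least n^2/(4m).
```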

First Algorithm:

“Method of conditional probabilities.” The story is that you have random variables \(X_1,\ldots, X_\ell\) and a function \(F\) such that \(\mathop{\mathrm{\mathbb{E}}}[F(X_1,\ldots, X_\ell)]\le R\), and you want to find a setting \(x_1,\ldots, x_\ell\) which makes \(F(x_1,\ldots, x_\ell)\le R\). We can do this iteratively.

Our \(X_i\) are Boolean random variables indicating whether each piece of track is placed, each turned on independently with some probability \(p\). Our \(F\) is the amount of train track required if you take the \(X_i\)’s which are turned on and then add material to fix the places that are broken.

In particular, if we have selected \(x_1,\ldots, x_k\) so that \[\mathop{\mathrm{\mathbb{E}}}[F(x_1,\ldots, x_k, X_{k+1},\ldots, X_\ell)]\le R\] then there must be some \(x_{k+1}\) which makes \[\mathop{\mathrm{\mathbb{E}}}[F(x_1,\ldots, x_k, x_{k+1},X_{k+2},\ldots, X_\ell)]\le R,\] since the first expectation is an average of the second over the choices of \(X_{k+1}\), and some choice must do at least as well as the average.

So the only trick is making it easy to compute this conditional expectation, and you can do this for the train track problem.
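
To illustrate on a toy problem (not the paper’s \(F\)): here is the classic derandomization of the random-cut argument for MAX-CUT, where the conditional expectation is exactly computable. There the goal is to keep the expectation at least \(R\) rather than at most \(R\), but the iteration is identical.

```python
def greedy_max_cut(n, edges):
    """Method of conditional expectations on a toy problem (MAX-CUT).
    A uniformly random 2-coloring cuts each edge with probability 1/2,
    so E[cut size] = len(edges)/2. Fix the vertices one at a time,
    always choosing the side that keeps the conditional expectation
    of the cut size from dropping."""
    side = {}

    def conditional_expectation():
        # Decided edges contribute 0 or 1; undecided edges contribute 1/2.
        total = 0.0
        for u, v in edges:
            if u in side and v in side:
                total += 1.0 if side[u] != side[v] else 0.0
            else:
                total += 0.5
        return total

    for v in range(n):
        side[v] = 0
        if_zero = conditional_expectation()
        side[v] = 1
        if if_zero >= conditional_expectation():
            side[v] = 0
    return side  # the final cut has size at least len(edges)/2
```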

Second Algorithm:

First we review the Lovász Local Lemma. You have independent random variables \(X_1,\ldots, X_s\), and events \(E_1,\ldots, E_m\) such that each \(E_i\) is determined by some subset of the \(X_j\)’s. Suppose each event is unlikely, i.e., there exists a small \(p\) such that \(\Pr[E_i]\le p\) for all \(i\), and the dependencies between the events are low, i.e., there exists a small \(d\in \mathbb{N}\) such that each \(E_i\) shares some \(X_j\) with at most \(d\) other \(E_{i'}\)’s. Then, if \(pd\le \frac{1}{e}\), there is positive probability that none of \(E_1,\ldots, E_m\) occur.

The Algorithmic Lovász Local Lemma is an efficient algorithm to find such a setting (the Moser–Tardos “fix-it” algorithm): sample the \(X_i\)’s randomly, and so long as some event \(E_j\) has occurred, resample the \(X_i\)’s that \(E_j\) depends on. The expected number of resamplings is at most \(\frac{m}{d}\).
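
Here is a minimal sketch of the fix-it algorithm on a toy constraint system (CNF clauses; the encoding is my own choice, not from the paper):

```python
import random

def fix_it(num_vars, clauses):
    """Moser-Tardos style fix-it algorithm on CNF-SAT. A clause is a
    list of literals: +i means variable i must be True, -i means False.
    Bad event E_j = "clause j is violated." Sample all variables, then
    repeatedly resample the variables of any violated clause."""
    assignment = [random.random() < 0.5 for _ in range(num_vars)]

    def violated(clause):
        # Violated iff every literal in the clause is falsified.
        return all(assignment[abs(lit) - 1] != (lit > 0) for lit in clause)

    while True:
        bad = next((c for c in clauses if violated(c)), None)
        if bad is None:
            return assignment
        for lit in bad:  # resample only the X_i's this event depends on
            assignment[abs(lit) - 1] = random.random() < 0.5
```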

Third Algorithm:

Min-hashing

You have a collection of (not-necessarily-disjoint) sets and want to sample an element from each of them.

How you do it: assign each element an independent random value in \((0,1)\). For each set, take its element with the lowest value.

It turns out that this hits every set while choosing only a small number of distinct elements overall: sets that overlap heavily are likely to share the same minimum.
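
A minimal sketch (the function name and return convention are mine):

```python
import random

def min_hash_sample(sets):
    """Min-hashing: give every element an independent uniform value in
    (0,1), then represent each set by its minimum-value element.
    Heavily overlapping sets tend to agree on their minimum, so few
    distinct elements get chosen overall."""
    universe = set().union(*sets)
    value = {x: random.random() for x in universe}
    return {min(s, key=lambda x: value[x]) for s in sets}

# Example: three overlapping sets are often covered by one or two elements.
print(min_hash_sample([{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]))
```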

Summary

Dang, these are some neat ideas! I bet they are applicable to lots of problems.