Here’s a classic result in complexity theory:

$$\mathsf{BPP} \subseteq \Sigma_2^p \cap \Pi_2^p.$$

Why is this the case?

BPP is the set of languages for which there is a polynomial-time algorithm $A(x, r)$, which takes as input a random seed $r$ of length $m$ in addition to an input $x$ of length $n$, such that:

  • For any true statement $x$, for at least a $1 - 2^{-n}$ fraction of the total random seeds $r$, the algorithm on random seed $r$ outputs YES.
  • For any false statement $x$, $A(x, r)$ outputs YES on at most a $2^{-n}$ fraction of the random seeds.

(The textbook definition only demands $2/3$ and $1/3$ here; the much stronger bounds above follow by repeating the algorithm and taking a majority vote.)
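As a concreteness check, here is a minimal sketch (with hypothetical names) that tests the acceptance condition by brute force over all seeds; this is feasible only for tiny seed lengths $m$.

```python
from itertools import product

def bpp_promise(A, x, m, n):
    """Classify x by enumerating all 2^m random seeds and comparing the
    accepting fraction against the BPP thresholds (brute force, tiny m only)."""
    seeds = list(product([0, 1], repeat=m))
    frac = sum(A(x, r) for r in seeds) / len(seeds)
    if frac >= 1 - 2 ** (-n):
        return "YES"
    if frac <= 2 ** (-n):
        return "NO"
    return "promise violated"

# Toy algorithm: accepts on every seed except the all-zeros seed,
# i.e. on a 15/16 fraction when m = 4, which clears the 1 - 2^{-2} threshold.
print(bpp_promise(lambda x, r: int(any(r)), "some input", m=4, n=2))  # YES
```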

How can we distinguish between the case where there are many random seeds on which the algorithm accepts and the case where there are very few?

Key observation (write $S \subseteq \{0,1\}^m$ for the set of random seeds on which the algorithm accepts, and $S \oplus t = \{s \oplus t : s \in S\}$ for its shift by an offset $t$):

  • If $S$ has measure very close to $1$, then there exists a small set of offsets $t_1, \dots, t_k$ such that $\bigcup_{i=1}^{k} (S \oplus t_i)$ covers the entire hypercube $\{0,1\}^m$.
  • If $S$ has measure very close to $0$, then there is no small set of offsets $t_1, \dots, t_k$ such that $\bigcup_{i=1}^{k} (S \oplus t_i)$ covers the entire hypercube.

More precisely, how many shifts of a set of measure $1 - \varepsilon$ do we need before it covers the entire hypercube? Well, if I take a random shift, the probability that any particular point is not covered is at most $\varepsilon$. If I take $k$ shifts, the probability that some particular point is not covered by any of these shifts is at most $\varepsilon^k$. The probability that there is any point which is not covered is at most $2^m \varepsilon^k$ by a union bound.
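The shifting idea can be sanity-checked on a tiny hypercube. In this sketch (helper name is mine), seeds are encoded as the integers $0, \dots, 2^m - 1$ and a shift is a bitwise XOR:

```python
def covers(S, shifts, m):
    """Does the union of the shifted copies S XOR t, over the given shifts t,
    cover the whole hypercube {0,1}^m (seeds encoded as integers)?"""
    covered = set()
    for t in shifts:
        covered |= {s ^ t for s in S}
    return len(covered) == 2 ** m

# m = 2: a set of measure 3/4 is covered by just two shifts ...
S_big = {0b00, 0b01, 0b10}
print(covers(S_big, [0b00, 0b11], m=2))    # True

# ... while a set of measure 1/4 is not.
S_small = {0b00}
print(covers(S_small, [0b00, 0b11], m=2))  # False
```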

For our particular case, setting $\varepsilon = 2^{-n}$ and $k = m$ we have:

$$2^m \varepsilon^k = 2^m \cdot 2^{-nm} = 2^{m(1-n)} < 1 \qquad (n \ge 2),$$

so by the probabilistic method there exists a set of $m$ shifts $t_1, \dots, t_m$ such that the entire hypercube is covered.
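Plugging in numbers (the values of $m$ and $n$ below are arbitrary illustrative choices of mine) shows how crushingly small the union bound is:

```python
# Union bound from the probabilistic argument:
# Pr[some point uncovered] <= 2^m * eps^k, with eps = 2^{-n} and k = m shifts.
m, n = 20, 4
bound = 2 ** m * (2 ** (-n)) ** m   # = 2^{m(1-n)} = 2^{-60}
print(bound < 1)   # True: a covering set of m shifts must exist
```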

In the NO case, however, this set of shifts will still definitely not cover the hypercube: each shifted copy $S \oplus t_i$ contains at most $2^{m-n}$ points, so the $m$ shifts cover at most $m \cdot 2^{m-n}$ points in total, which is far less than $2^m$.
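The NO-side counting checks out numerically as well (again with illustrative values of $m$ and $n$):

```python
# NO case: |S| <= 2^{m-n}, so m shifted copies cover at most m * 2^{m-n} points.
m, n = 20, 5
max_covered = m * 2 ** (m - n)       # 20 * 2^15 = 655360
print(max_covered < 2 ** m)          # True: falls short of 2^20 = 1048576
```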

So here's the formula corresponding to the BPP problem:

$$\exists\, t_1, \dots, t_m \in \{0,1\}^m \;\; \forall\, r \in \{0,1\}^m: \;\; \bigvee_{i=1}^{m} \big( A(x, r \oplus t_i) = \text{YES} \big).$$

In less fancy language:

There is a small set of shifts such that for every random seed, you can shift it to some seed on which the algorithm accepts.

This is true for YES instances in BPP and false for NO instances in BPP. Since the formula has the shape $\exists\forall$ (with a polynomial-time verifiable inner predicate), it places BPP in $\Sigma_2^p$; and because BPP is closed under complement, BPP lies in $\Pi_2^p$ as well.
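To tie it all together, here is a brute-force check of the $\exists\forall$ formula on a toy seed length (a sketch with names of my own choosing; real instances are far too large to enumerate):

```python
from itertools import product

def sigma2_formula(S, m, k):
    """Brute-force the formula: do there exist shifts t_1..t_k such that
    for every seed r, some shifted seed r XOR t_i lands in the accepting set S?"""
    universe = range(2 ** m)
    return any(
        all(any((r ^ t) in S for t in shifts) for r in universe)
        for shifts in product(universe, repeat=k)
    )

m = 3
S_yes = set(range(2 ** m)) - {0}   # measure 7/8: a YES-style accepting set
S_no = {0}                         # measure 1/8: a NO-style accepting set
print(sigma2_formula(S_yes, m, k=m))   # True  -- some m shifts cover everything
print(sigma2_formula(S_no, m, k=m))    # False -- m shifts cover at most m points
```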