This is a fundamental result in learning theory and crypto.

Discussion based on some lecture notes by Ryan O’Donnell.

Def: $b$ is a hardcore predicate for OWF $f$ if any PPT algorithm has negligible advantage at guessing $b(x)$ given $f(x)$.

Fact: OWP + hardcore predicate $\implies$ PRG. (Concretely: if $f$ is a OWP with hardcore predicate $b$, then $G(x) = f(x) \,\|\, b(x)$ is a PRG stretching $n$ bits to $n+1$.)
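As a toy sketch of this fact — everything here (the permutation `f`, the predicate `b`) is a stand-in I made up, not actually one-way or hardcore; the point is just the shape of the construction:

```python
N = 8  # input length in bits

def f(x: int) -> int:
    """Stand-in permutation on N-bit strings (multiplication by an odd
    constant mod 2^N is a bijection). A real instantiation needs a OWP."""
    return (37 * x) % (1 << N)

def b(x: int) -> int:
    """Stand-in predicate (parity of x). A real instantiation needs a
    hardcore predicate for f."""
    return bin(x).count("1") % 2

def G(x: int) -> tuple[int, int]:
    """The PRG: N bits in, N + 1 bits out."""
    return f(x), b(x)
```

Since $f$ is a permutation, the first $n$ output bits are exactly uniform on a uniform seed; the whole output is pseudorandom because predicting $b(x)$ from $f(x)$ is hard.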

GL Theorem: If $f$ is a OWP, then

  • $g(x, r) := (f(x), r)$ is also a OWP,
  • and $b(x, r) := \langle x, r \rangle \bmod 2$ is a hardcore predicate for $g$.
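Concretely, the GL predicate is just the inner product of the bit strings mod 2 — a minimal sketch (`owp_f` below is a hypothetical placeholder for the one-way permutation):

```python
def inner_product_mod2(x: list[int], r: list[int]) -> int:
    """The GL hardcore bit b(x, r) = <x, r> mod 2."""
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

def g(owp_f, x: list[int], r: list[int]):
    """The padded permutation g(x, r) = (f(x), r): r is passed along in the clear."""
    return owp_f(x), r
```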

Apparently this follows as a simple corollary of a related thing:

GL Theorem prime Given query access to $f \colon \{0,1\}^n \to \{-1, 1\}$, find the large Fourier coefficients of $f$.

More specifically, given $\tau > 0$ we want to output a list containing all the coefficients $S$ with weight $|\hat{f}(S)| \geq \tau$, and such that at most a $\delta$-fraction of the list is coefficients of weight $< \tau/2$.

Lemma Can estimate Fourier coefficients by sampling: since $\hat{f}(S) = \mathbb{E}_x[f(x)\,\chi_S(x)]$, the empirical mean of $f(x)\chi_S(x)$ over random samples approximates $\hat{f}(S)$ to within $\pm\epsilon$ with high probability.
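A minimal sketch of that sampling estimator (function names are mine, not from the notes); here $f$ maps bit-lists to $\pm 1$:

```python
import random

def chi(S, x):
    """The character chi_S(x) = (-1)^{sum of x_i over i in S}."""
    return (-1) ** sum(x[i] for i in S)

def estimate_coeff(f, n, S, samples=20000):
    """Estimate hat{f}(S) = E_x[f(x) chi_S(x)] by an empirical mean;
    by a Chernoff bound the error is ~1/sqrt(samples) w.h.p."""
    total = 0
    for _ in range(samples):
        x = [random.randrange(2) for _ in range(n)]
        total += f(x) * chi(S, x)
    return total / samples
```

For example, if $f = \chi_{\{0,1\}}$ itself, the estimate at $S = \{0,1\}$ comes out near $1$, and near $0$ everywhere else.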


Let’s prove GL theorem prime:

Lemma 1 At most $1/\tau^2$ coeffs with weight larger than $\tau$.

Pf:

We have $\sum_S \hat{f}(S)^2 = \mathbb{E}_x[f(x)^2] = 1$ by Parseval, so at most $1/\tau^2$ coefficients can have $|\hat{f}(S)| > \tau$. $\square$

Lemma 2 Suppose we have a “wild card indicator string”, something like 10xx110xx0x110x. What this really refers to is the set of all ways of completing this thing to a binary string by replacing the x’s with 0’s and 1’s. Anyways, it turns out that we can efficiently estimate the weight of one of these things — the total Fourier mass $\sum \hat{f}(S)^2$ over all coefficients $S$ in the set!

proof: Deferred

vibes are just “this is a reasonably nice expression — so we are justified in guessing it has a nice fourier analytic interpretation”.
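Here’s a sketch of that estimator (naming is mine) for a bucket that fixes the first $k$ coordinates to a set $J$ and wildcards the rest. The nice interpretation is the identity $\sum_{T \subseteq \{k,\dots,n-1\}} \hat{f}(J \cup T)^2 = \mathbb{E}_{x, x', y}\big[f(xy)\, f(x'y)\, \chi_J(x)\, \chi_J(x')\big]$, where $x, x'$ are independent uniform $k$-bit prefixes and $y$ is a shared uniform suffix — so sampling works:

```python
import random

def chi(S, x):
    """chi_S(x) = (-1)^{sum of x_i over i in S}."""
    return (-1) ** sum(x[i] for i in S)

def estimate_bucket_weight(f, n, k, J, samples=20000):
    """Estimate W = sum over T in {k..n-1} of hat{f}(J u T)^2, the Fourier
    weight of the bucket fixing J on the first k coordinates, via
    W = E[f(x + y) f(x' + y) chi_J(x) chi_J(x')] with x, x' independent
    uniform k-bit prefixes and y a shared uniform (n - k)-bit suffix."""
    total = 0
    for _ in range(samples):
        x  = [random.randrange(2) for _ in range(k)]
        xp = [random.randrange(2) for _ in range(k)]
        y  = [random.randrange(2) for _ in range(n - k)]
        total += f(x + y) * f(xp + y) * chi(J, x) * chi(J, xp)
    return total / samples
```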

Suppose that we had the above thing figured out. Then you can do the classic tree thing to find all large Fourier coefficients: start from the all-wildcard string, repeatedly split a bucket by fixing its next wildcard bit to 0 or 1, and prune any bucket whose estimated weight drops below $\tau^2/2$. By Lemma 1, only $O(1/\tau^2)$ buckets survive at any level, so this is efficient.
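A sketch of that tree search (naming is mine; `estimate_weight(f, n, k, J)` is assumed to be a bucket-weight estimator in the sense of Lemma 2, estimating the Fourier weight of the bucket that fixes $J$ on the first $k$ coordinates):

```python
def goldreich_levin(f, n, tau, estimate_weight):
    """Find (w.h.p.) all S with |hat{f}(S)| >= tau: refine wildcard buckets
    one coordinate at a time, keeping only buckets whose estimated weight is
    at least tau^2 / 2. By Lemma 1 at most O(1/tau^2) buckets survive any
    level, so the tree stays small."""
    buckets = [set()]          # level 0: everything is a wildcard
    for k in range(n):
        survivors = []
        for J in buckets:
            for Jp in (set(J), J | {k}):      # fix coordinate k to 0 or to 1
                if estimate_weight(f, n, k + 1, Jp) >= tau ** 2 / 2:
                    survivors.append(Jp)
        buckets = survivors
    return buckets             # fully specified S's with large hat{f}(S)^2
```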

Remark If your function has a $1 - \epsilon$ fraction of its Fourier mass concentrated in a small number of Fourier coefficients, then this means that we can learn the function: find the big coefficients, estimate them, and predict with the sign of the resulting approximation (this is the Kushilevitz–Mansour algorithm).


how does this relate to the crypto thing?

well, if $b$ is not a hardcore predicate for $g$, then some PPT algorithm $A$ has non-negligible advantage $\epsilon$, so for many $x$ it’s the case that $\Pr_r[A(f(x), r) = \langle x, r \rangle] \geq \tfrac{1}{2} + \epsilon$.

This means that $r \mapsto (-1)^{A(f(x), r)}$ is strongly correlated with the character $\chi_x$ / has a large Fourier coefficient (of weight at least $2\epsilon$) at $x$, so we can go rip out that Fourier coefficient. We use GL to get a list of all the big Fourier coefficients, and then try to find one that’s consistent with the data, i.e., a candidate $x'$ with $f(x') = f(x)$. Then we output that. This means we inverted $f$ with non-negligible probability, which is impossible for a OWP.
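The reduction can be sketched like this — everything here is hypothetical glue of my own naming: `predictor` plays the role of the adversary $A$, and `gl_search` plays the role of the GL coefficient-finding algorithm:

```python
def invert(f, y, n, predictor, gl_search, tau):
    """Try to invert y = f(x), given predictor(y, r) that guesses <x, r> mod 2
    with advantage >= tau/2. View g(r) = (-1)^{predictor(y, r)} as a Boolean
    function of r: its Fourier coefficient at the secret x is >= tau, so the
    GL search over g returns a short list of candidates that includes x w.h.p."""
    g = lambda r: (-1) ** predictor(y, r)
    for S in gl_search(g, n, tau):
        x = [1 if i in S else 0 for i in range(n)]
        if f(x) == y:          # candidates are cheap to verify against the image
            return x
    return None
```

Since the GL list has only $O(1/\tau^2)$ entries, checking each candidate against $y$ is cheap.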