Suppose that we want to sample from an HMM with complicated transition rules, conditional on some observations. (If the dynamics are linear-Gaussian you can use a Kalman filter, and if the state space is discrete you can do classical belief propagation.)

Let’s say there is a sequence of hidden states $x_1, x_2, \ldots, x_T$ and we get a sequence of observation values $y_1, y_2, \ldots, y_T$.

There are some simple update rules governing the evolution of our probability distribution on $x_t$ given the observations so far ---

$$p(x_{t+1} \mid y_{1:t}) = \sum_{x_t} p(x_{t+1} \mid x_t)\, p(x_t \mid y_{1:t}),$$

$$p(x_{t+1} \mid y_{1:t+1}) \propto p(y_{t+1} \mid x_{t+1})\, p(x_{t+1} \mid y_{1:t}).$$
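When the state space is discrete, these two updates can be computed exactly. Here is a minimal numpy sketch of one predict/update step; the function and argument names are just for illustration:

```python
import numpy as np

def forward_step(belief, T, lik):
    """Exact predict/update for a discrete HMM with K states.

    belief: p(x_t | y_{1:t}) as a length-K vector
    T:      K x K transition matrix, T[i, j] = p(x_{t+1} = j | x_t = i)
    lik:    length-K vector, lik[j] = p(y_{t+1} | x_{t+1} = j)
    """
    predicted = belief @ T       # predict: p(x_{t+1} | y_{1:t})
    posterior = lik * predicted  # update: multiply in the likelihood
    return posterior / posterior.sum()
```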

We’re going to maintain particles $x_t^{(1)}, \ldots, x_t^{(N)}$ and weights $w_t^{(1)}, \ldots, w_t^{(N)}$ (which sum to $1$) such that

$$p(x_t \mid y_{1:t}) \approx \sum_{i=1}^N w_t^{(i)}\, \delta_{x_t^{(i)}}(x_t).$$
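In other words, the weighted particles give us a cheap handle on posterior expectations; a one-function sketch (the name is made up):

```python
def posterior_expectation(particles, weights, f=lambda x: x):
    # E[f(x_t) | y_{1:t}] ~= sum_i w_i * f(x_i), assuming weights sum to 1
    return sum(w * f(x) for x, w in zip(particles, weights))
```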

Given this approximation, we update as

$$p(x_{t+1} \mid y_{1:t+1}) \propto p(y_{t+1} \mid x_{t+1}) \sum_{i=1}^N w_t^{(i)}\, p\left(x_{t+1} \mid x_t^{(i)}\right).$$

Then, ideally we’d choose some new particles to approximate this distribution and keep going.

In reality it’s not totally clear how to choose the new particles --- for instance, sampling exactly from the distribution above might be computationally challenging.

So we’ll instead use importance sampling!
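(Recall the basic importance-sampling identity: for any proposal $q$ that puts mass wherever $p$ does,

$$\mathbb{E}_{x \sim p}[f(x)] = \mathbb{E}_{x \sim q}\!\left[\frac{p(x)}{q(x)}\, f(x)\right],$$

so samples drawn from $q$ and weighted by $p/q$ stand in for samples from $p$.)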

We set

$$q(x_{t+1}) = \sum_{i=1}^N w_t^{(i)}\, p\left(x_{t+1} \mid x_t^{(i)}\right),$$

which we can actually sample from (choose an index $i$ with probability $w_t^{(i)}$, then run the transition from $x_t^{(i)}$). We sample our $x_{t+1}^{(i)}$'s from this distribution and then reweight them by

$$w_{t+1}^{(i)} \propto \frac{p\left(y_{t+1} \mid x_{t+1}^{(i)}\right) \sum_{j=1}^N w_t^{(j)}\, p\left(x_{t+1}^{(i)} \mid x_t^{(j)}\right)}{q\left(x_{t+1}^{(i)}\right)} = p\left(y_{t+1} \mid x_{t+1}^{(i)}\right),$$

except you should normalize the weights to sum to one.
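Putting the loop together: below is a minimal sketch of one filtering step under this scheme, assuming you can sample from the transition and evaluate the observation likelihood pointwise. The names `sample_transition` and `obs_likelihood` are stand-ins for whatever your model provides.

```python
import numpy as np

def particle_filter_step(particles, weights, y_next,
                         sample_transition, obs_likelihood, rng):
    """One step of the filter described above.

    particles: length-N array of current particles x_t^(i)
    weights:   length-N array of weights summing to one
    sample_transition(x, rng) draws x_{t+1} ~ p(. | x_t = x)
    obs_likelihood(y, x) evaluates p(y | x_{t+1} = x)
    """
    N = len(particles)
    # Sampling from q: pick an ancestor index with probability w_t^(i),
    # then push it through the transition.
    ancestors = rng.choice(N, size=N, p=weights)
    new_particles = np.array([sample_transition(particles[a], rng) for a in ancestors])
    # Reweight: with this q, the importance ratio collapses to the
    # observation likelihood (see the cancellation above).
    new_weights = np.array([obs_likelihood(y_next, x) for x in new_particles])
    return new_particles, new_weights / new_weights.sum()  # normalize to sum to one

# Toy usage on a made-up nonlinear model: sine dynamics, Gaussian observation noise.
rng = np.random.default_rng(0)
step = lambda x, rng: np.sin(x) + 0.5 * rng.normal()
lik = lambda y, x: np.exp(-0.5 * (y - x) ** 2)
particles = rng.normal(size=1000)
weights = np.full(1000, 1 / 1000)
particles, weights = particle_filter_step(particles, weights, 0.3, step, lik, rng)
```

A common refinement is to skip the resampling step when the weights are still well balanced (e.g. by monitoring the effective sample size), but the version above is the basic scheme.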