Q: To what extent should arguments of the following form influence policy updates?
Argument: If reality has property P (which is logically possible), then performing action a yields infinite (or extremely large) utility.
Reality does not have any property under which action a yields negatively infinite (or unboundedly negative) utility.
Thus, a rational agent must perform action a.
Quick response:
- I think expected utility maximization is generally the right framework for reasoning under uncertainty.
- Maybe there are some weird things that go on if you have epistemic uncertainty about the probabilities in question.
- But most of the discomfort around "Pascal's wagers" comes from scope insensitivity / the fact that this is not a problem that human brains are optimized to think about.
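
The dynamic the argument exploits can be made concrete with a toy expected-utility calculation. This is a minimal sketch with hypothetical numbers (nothing here is from the question itself): a near-certain modest payoff versus a "wager" whose payoff is enormous but astronomically unlikely.

```python
# Toy illustration of how a tiny probability of a huge payoff can
# dominate an expected-utility comparison (all numbers are hypothetical).

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs summing to probability 1."""
    return sum(p * u for p, u in outcomes)

# A mundane action: a certain, modest payoff.
mundane = [(1.0, 10.0)]

# A "wager" action: almost surely nothing, with a one-in-a-billion
# chance of an enormous payoff.
wager = [(1.0 - 1e-9, 0.0), (1e-9, 1e12)]

print(expected_utility(mundane))  # modest certain value
print(expected_utility(wager))    # dominated by the tiny-probability term
```

Under straight expected-utility maximization the wager wins by a factor of about a hundred, even though it almost surely pays nothing; the point is that the counterintuitiveness of that verdict is a fact about our scope-insensitive intuitions, not (by itself) a flaw in the framework.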