There are really two questions here:

  1. Does my objective function change over time?
  2. Do I cause this change? This is of course closely related to the question "Should I change my objective function?"

The answer to Q1 is easy: yes. This is an empirical question of explanation, not prediction: I only have to look back at how my values have changed, not forecast how they will change.

For example, “caring about people” has a larger weight in Alek’s current objective function than it had in Alek’s past objective function.


Q2 is more challenging. First, to even discuss it we have to agree that appealing to determinism is useless here: it’s meaningful to say that I can choose things.

Here’s an example of what changing my objective function might look like:

My friend tells me how they decided to stop eating sugary desserts because they realized desserts are unhealthy.
I had never thought of this, and the new information causes me to update the weights in my objective function: I value sugar less and value eating healthily more.

Again, it’s possible to be pedantic and say “your objective function is just to do the best thing according to your current data, so obtaining data doesn’t mean you are updating your objective function”. But tabooing the phrase resolves the issue. What I mean by “can I change my objective function?” is what is reflected in the example above. Specifically, it means I update the weights that I use to compute my score for “how desirable is this universe state?”.
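To make “updating the weights” concrete, here is a minimal sketch. The feature names, weights, and numbers are invented for illustration; the point is only the distinction between the state being scored and the weights used to score it.

```python
# A toy objective function: a weighted sum over features of a universe state.
# All feature names and numbers below are made up for illustration.

def desirability(state: dict, weights: dict) -> float:
    """Score how desirable a universe state is under the current weights."""
    return sum(weights.get(feature, 0.0) * value for feature, value in state.items())

# Weights before the conversation with my friend.
weights = {"sugar_enjoyment": 0.8, "health": 0.5, "caring_about_people": 1.0}

state = {"sugar_enjoyment": 1.0, "health": 0.3, "caring_about_people": 0.7}
print(desirability(state, weights))  # score under the old weights

# New information ("sugary desserts are unhealthy") changes the weights themselves,
# not merely my beliefs about which actions lead to which states.
weights["sugar_enjoyment"] = 0.2
weights["health"] = 0.9
print(desirability(state, weights))  # the same state now scores differently
```

The same state gets a different score afterwards because the weights changed; that change of weights is what “changing my objective function” means in this post.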

Now, whether I always act to optimize my objective function is a different question, and the answer is no: partly because of incomplete information about the future, but mostly because of akrasia.