Biased reward-learning in CIRL

https://www.lesswrong.com/posts/YHQZHbhx7afHJ5Esw/biased-reward-learning-in-cirl

Manipulation actions

My informal critique of CIRL is that it assumes two things that aren't true: that \mathbf{H} knows \theta (i.e. knows their own values), and that \mathbf{H} is perfectly rational (or noisily rational in a specific way). Since I've been developing more machinery in this area, I can now try to state this more formally.

Assume that M always starts in a fixed state s_0, that the reward is always zero in this initial state (so R(s_0,\cdot,\cdot;\cdot)=0), and that transitions from this initial state are independent of the agents' actions (so T(s|s_0,\cdot,\cdot) is defined independently of the actions). This makes \mathbf{R}'s initial action a^\mathbf{R}_0 irrelevant (since \mathbf{R} has no private information to transmit).

Then let \pi^\mathbf{H} be the optimal policy for \theta, and (\pi^\mathbf{H})' be the optimal policy for \theta' (this \theta' may be either independent of or dependent on \theta). Among the action set \mathcal{A}^\mathbf{R} is a manipulative action a' (this could involve tricking the human, drugging them, brain surgery, effective propaganda, etc.). If a^\mathbf{R}_0=a', the human \mathbf{H} will pursue (\pi^\mathbf{H})'; otherwise, they will pursue \pi^\mathbf{H}. If we designate by I' the indicator variable of a^\mathbf{R}_0=a' (so it's 1 if that happens and 0 otherwise), then this corresponds to following the compound policy:

\pi = I'(\pi^\mathbf{H})' + (1-I')\pi^\mathbf{H}.

This is well defined: policies map past sequences of states and actions (and \theta) to actions, and I' is well defined given the past actions, so the expression above is indeed such a map.
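As a toy illustration, here is a minimal Python sketch of that compound policy. All the types and names (State, Action, MANIPULATIVE_ACTION, and so on) are assumptions made up for this sketch, not anything from the CIRL paper:

```python
from typing import Callable, List, Tuple

# Toy types for a sketch of the compound policy pi = I'(pi_H)' + (1 - I')pi_H.
# A "history" is the past sequence of (state, a_H, a_R) triples.
State = str
Action = str
History = List[Tuple[State, Action, Action]]
Policy = Callable[[History, State], Action]

MANIPULATIVE_ACTION: Action = "a_prime"  # stand-in for the robot action a'

def compound_policy(pi_H: Policy, pi_H_prime: Policy) -> Policy:
    """Return the policy H actually follows: (pi_H)' if R played a' at step 0, else pi_H.

    I' is the indicator of a_R_0 == a'; it is a function of the past actions alone,
    so the compound object is still a well-defined map from histories to actions.
    """
    def pi(history: History, state: State) -> Action:
        if not history:
            # At the very first human action the indicator is not yet determined;
            # the post leaves a_H_0 unspecified (see the final section).
            return pi_H(history, state)
        _, _, a_R_0 = history[0]
        manipulated = (a_R_0 == MANIPULATIVE_ACTION)  # this is I'
        return pi_H_prime(history, state) if manipulated else pi_H(history, state)
    return pi
```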

Decomposing the human policy

What is \mathbf{R} to do with that strange compound policy? Let's assume that \mathbf{R} doesn't know \theta or \theta', but does know \mathbf{H} well enough to predict the compound nature of \pi. In one approach, \mathbf{R} can see the policy \pi as partially irrational. So it decomposes \pi as a pair (p,R), as in this paper, with R the 'true reward' and p a map from rewards to policies that encodes \mathbf{H}'s rationality. The pair is compatible with the human policy if p(R)=\pi. Presumably here R=R(\cdot,\cdot,\cdot;\theta) would eventually be deduced as the true reward. But that very same paper shows that (p,R) cannot be deduced from \pi alone, so \mathbf{R} would need some extra information (some 'normative assumptions') to allow for that decomposition. We might be tempted to have it simply recognise the manipulative nature of a', but if \mathbf{R} could classify all its manipulative actions, there wouldn't be any problem in the first place (and this would be tantamount to knowing the decomposition (p,R) anyway).
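To see why (p,R) is underdetermined by \pi, here is a toy sketch, using an invented two-action example (nothing here is taken from the paper itself): several (planner, reward) pairs reproduce exactly the same observed policy.

```python
# A toy illustration of the (p, R) decomposition being underdetermined by pi alone:
# several (planner, reward) pairs are compatible with the same observed policy.

ACTIONS = ["left", "right"]

def R_true(action: str) -> float:
    """A made-up 'true reward' that favours 'right'."""
    return 1.0 if action == "right" else 0.0

def rational(R):
    """Planner p that maximises R."""
    return lambda: max(ACTIONS, key=R)

def anti_rational(R):
    """Planner p that minimises R."""
    return lambda: min(ACTIONS, key=R)

pi_observed = "right"

candidates = {
    "(rational, R_true)": rational(R_true),
    "(anti-rational, -R_true)": anti_rational(lambda a: -R_true(a)),
    "(always-pi planner, any R)": (lambda: pi_observed),
}

for name, planner in candidates.items():
    assert planner() == pi_observed  # all pairs reproduce the same policy
    print(name, "->", planner())
```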

Multiple rewards, or compound rewards

Note that there is generically no \theta'' that corresponds to the policy \pi. One might be tempted to say that \mathbf{H} is maximising the compound reward:

I'R(\cdot,\cdot,\cdot;\theta') + (1-I')R(\cdot,\cdot,\cdot;\theta).

But that is not a valid reward, because I' is defined over histories of states and actions, while the reward meta-function R only takes the last state and actions. In this circumstance, \mathbf{R} is in practice choosing the human reward through its initial action. Assuming it has some non-trivial information about \theta and \theta', all the issues about biasing and influencing rewards come to the fore (technically, the setup I've described isn't rich enough to allow for influential unbiased actions, but it can easily be enriched to allow that). \mathbf{R} will thus choose a' or not as its first action, depending on whether it expects R(\cdot,\cdot,\cdot;\theta) to be easier or harder to maximise than R(\cdot,\cdot,\cdot;\theta').

Another alternative is to extend the definition of rewards, allowing them to be defined over complete histories of states and actions, not just the last ones. If we require that all such rewards be parameterised by elements of \Theta, then there exists a \theta'' such that

R(\cdot;\theta'') = I'R(\cdot;\theta') + (1-I')R(\cdot;\theta).

In that case \mathbf{R} can conclude that the human is rationally signalling that it knows \theta'', and \mathbf{R} is technically immune to bias issues, since \mathbf{R} is merely updating its priors on \Theta, rather than choosing the human reward.

There are three problems with this perspective. The first is that it's wrong: the human knows \theta, not \theta''. The second is that, though \mathbf{R} is not choosing the human reward in theory, it is choosing it in practice: whether it chooses a' as its first action or not depends on its estimate of the value of R(\cdot;\theta) versus R(\cdot;\theta'), so the issues of bias and influence return. And finally, since optimal policies are unchanged by positive affine transformations of rewards, the policy \pi is also compatible with the reward functions

I'(cR(\cdot;\theta')+d) + (1-I')(c'R(\cdot;\theta)+d'),

for any c,c'>0 and any d,d'. So whether R(\cdot;\theta) or R(\cdot;\theta') is chosen depends also on the prior over all those compatible reward functions.
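The history-dependent construction can be sketched as follows. The types and names are illustrative assumptions; the convention matches the compound reward above, with the \theta' branch active exactly when I'=1.

```python
from typing import Callable, List, Tuple

# Sketch of the "extended reward" idea: let rewards take the whole history,
# so the indicator I' (did R play a' at step 0?) can legally appear inside them.
State = str
Action = str
History = List[Tuple[State, Action, Action]]      # past (s, a_H, a_R) triples
HistoryReward = Callable[[History], float]

MANIPULATIVE_ACTION: Action = "a_prime"

def indicator_I_prime(history: History) -> bool:
    """I': true iff the robot's very first action was the manipulative a'."""
    return bool(history) and history[0][2] == MANIPULATIVE_ACTION

def compound_reward(R_theta: HistoryReward, R_theta_prime: HistoryReward) -> HistoryReward:
    """R(.;theta'') = I' R(.;theta') + (1 - I') R(.;theta), matching the policy pi:
    on manipulated histories the human behaves as a theta'-maximiser, otherwise theta."""
    def R_theta_double_prime(history: History) -> float:
        return R_theta_prime(history) if indicator_I_prime(history) else R_theta(history)
    return R_theta_double_prime

def affine_variant(R_theta: HistoryReward, R_theta_prime: HistoryReward,
                   c: float, d: float, c_prime: float, d_prime: float) -> HistoryReward:
    """Any per-branch positive affine rescaling (c, c_prime > 0) is compatible with
    the same compound policy, since optimality is unchanged by such rescalings."""
    def R(history: History) -> float:
        if indicator_I_prime(history):
            return c * R_theta_prime(history) + d
        return c_prime * R_theta(history) + d_prime
    return R
```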

Identifying compound rewards

But note that the third point (prior dependence) can be made to compensate for the second one (the value of R(\cdot;\theta) versus R(\cdot;\theta')). The constants c, c', d, and d' can be seen as normalisation constants. So if R(\cdot;\theta'') can be identified as a compound reward, maybe we can adjust the priors so that R(\cdot;\theta) and R(\cdot;\theta') are normalised to have comparable value, so that there is no bias pressure to choose one or the other. This is similar to the indifference approaches.

The main problem here is the same one that comes up in the discussion of grue and bleen and induction: 'compound reward' is not a natural category. Just as R(\cdot;\theta'') can be written as a compound mix of the other two rewards, we can define R(\cdot;\theta''')=I'R(\cdot;\theta)+(1-I')R(\cdot;\theta'); then, since I'(1-I') is always 0, we can write the 'basic' rewards as compound rewards:

\begin{array}{lr} R(\cdot;\theta)&=I'R(\cdot;\theta''')+(1-I')R(\cdot;\theta'') \\ R(\cdot;\theta')&=I'R(\cdot;\theta'')+(1-I')R(\cdot;\theta''') \end{array}

This may be solvable with simplicity priors, but it's not clear that that's the case; forcibly injecting the human with heroin, for example, could be seen as modelling the human as an approximate opioid-receptor-agonist maximiser, which seems a lot simpler than the actual human.
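A quick numerical check of that re-composition identity, using arbitrary stand-in values for R(\cdot;\theta) and R(\cdot;\theta') on a given history:

```python
# Because I'(1 - I') = 0, the "basic" rewards theta and theta' can themselves be
# written as compounds of theta'' and theta''', so "compound reward" is not a
# natural class. The numbers below are arbitrary stand-ins, chosen only to check
# the algebra for both values of the indicator.

for I in (0, 1):                      # the indicator I' only ever takes values 0 or 1
    for r_theta, r_theta_prime in [(0.3, 1.7), (-2.0, 5.0)]:
        r_2 = I * r_theta_prime + (1 - I) * r_theta        # R(.;theta'')
        r_3 = I * r_theta + (1 - I) * r_theta_prime        # R(.;theta''')
        # Recompose the "basic" rewards out of the compound ones:
        assert I * r_3 + (1 - I) * r_2 == r_theta          # R(.;theta)
        assert I * r_2 + (1 - I) * r_3 == r_theta_prime    # R(.;theta')

print("the 'basic' rewards are themselves compounds of theta'' and theta'''")
```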

Revealed meta-preferences

Finally, there is one element I haven't addressed: the human's first action a^\mathbf{H}_0, which is left unspecified by \pi. It might be possible to use this as information for \mathbf{R}, allowing it to decide between \theta and \theta'. But for that to work, the human \mathbf{H} has to be aware of \mathbf{R}'s possible manipulation, and has to have enough bandwidth to communicate their preference for \theta over \theta'. I'll try to return to this issue in future posts.

Comment

https://www.lesswrong.com/posts/YHQZHbhx7afHJ5Esw/biased-reward-learning-in-cirl?commentId=tbPN9NTxNP3q6qcco

The CIRL model might simply not be flexible enough to represent manipulative actions. The state s is known to both agents and is supposed to represent the world, but if \theta isn't known to \mathbf R, then the internal state of \mathbf H is not contained in s. So there needs to be some other state s^{\mathbf H}, invisible to \mathbf R, together with an extended transition function through which actions are able to affect this state.
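For concreteness, here is a minimal sketch of the extension the comment gestures at, with invented names: the human carries a private internal state that only the extended transition function (and hence actions such as a') can reach.

```python
from dataclasses import dataclass
from typing import Callable

# Sketch only: the human has a private internal state s_H (including theta) that
# the robot cannot observe, and the transition function is extended so that robot
# actions can modify s_H. None of this is part of the standard CIRL formalism.

@dataclass
class JointState:
    s: str        # the shared, mutually observed world state
    s_H: str      # the human's private internal state (invisible to R)

# Extended transition: (joint state, a_H, a_R) -> new joint state.
ExtendedTransition = Callable[[JointState, str, str], JointState]

def example_transition(joint: JointState, a_H: str, a_R: str) -> JointState:
    """A manipulative a' is exactly an action whose effect shows up in s_H."""
    new_world = joint.s                      # world dynamics elided in this toy
    new_internal = "theta_prime" if a_R == "a_prime" else joint.s_H
    return JointState(s=new_world, s_H=new_internal)
```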

https://www.lesswrong.com/posts/YHQZHbhx7afHJ5Esw/biased-reward-learning-in-cirl?commentId=jAuTWTirkcsom2YsD

My informal critique of CIRL is that it assumes two things that aren't true: that H knows θ (i.e. knows their own values) and that H is perfectly rational (or noisily rational in a specific way).

This seems like a valid critique. But do you see it as a deal breaker? In my mind, these are pretty minor failings of CIRL. Because if a person is being irrational and/or can't figure out what they want, then how can we expect the AI to? (Or is there some alternative scheme which handles these cases better than CIRL?)

(Update: Stuart replied to this comment on LessWrong: https://www.lesswrong.com/posts/YHQZHbhx7afHJ5Esw/biased-reward-learning-in-cirl?commentId=mtvrzJgNz5ngMMCvk)

Comment

https://www.lesswrong.com/posts/YHQZHbhx7afHJ5Esw/biased-reward-learning-in-cirl?commentId=mtvrzJgNz5ngMMCvk

I see this issue as being a fundamental problem:

https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into

https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1

https://www.lesswrong.com/posts/3e6pmovj6EJ729M2i/general-alignment-plus-human-values-or-alignment-via-human