This post is an appendix to "Infra-Bayesian physicalism: a formal theory of naturalized induction".
Proposition 2.16: Consider some \Gamma_{0}, \Gamma_{1}, \Phi and \Theta\in\HUC(\Gamma_{0}\times\Gamma_{1}\times\Phi). Define \pi:\el{\Gamma_{0}\times\Gamma_{1}}\rightarrow\el{\Gamma_{0}} by \pi(y,z,\alpha):=(y,\{y'\in\Gamma_{0}\mid(y',z)\in\alpha\}). Then, (\pi\times\Id{\Phi})_{*}\B_{\Gamma_{0}\times\Gamma_{1}}(\Theta)\subseteq\pr{\el{\Gam_0}\x\Phi}\B_{\Gamma_{0}}(\Theta)\subseteq\B_{\Gamma_{0}}(\pr{\Gamma_{0}\times\Phi}\Theta)
We can prove the second subset inclusion directly as a corollary of Proposition 2.10, just let the t of Proposition 2.10 be the projection function \Gam_1\x\Phi\to\Phi, so that just leaves the first subset inclusion direction. If you’ve seen the proofs so far, you know we do a thing where we try to show subset inclusion with expectations of functions and inequalities instead. And that the proofs all proceed by transforming the expectations until we get a maximum over contribution expectation values, and that’s always where the hard part of proving the inequalities shows up. So, let’s just get that part over with, an interested reader can work it out with previous proofs as a guide. Unusually, we’ll be keeping track of identity functions here.
Plugging in some f, and doing our usual activities to get every term into the appropriate form, we can get this result if we manage to show that \max_{\con'\in\B_{\Gam_0\x\Gam_1}(\Theta)}(\pi\x\Id{\Phi})_{*}\con'(\la y_0 \alp_0 x.f(y_0,x,\alp_0)) \le\max_{\con\in\B_{\Gam_0}(\Theta)}\pr{\el{\Gam_0}\x\Phi}\con(\la y_0 \alp_0 x.f(y_0,x,\alp_0)) So, to establish this, we’ll show that, given some \con'\in\B_{\Gam_0\x\Gam_1}(\Theta), we have (\pi\x\Id{\Gam_1\x\Phi})_{*}(\con')\in\B_{\Gam_0}(\Theta), and that \pr{\el{\Gam_0}\x\Phi}((\pi\x\Id{\Gam_1\x\Phi})_{*}\con')=(\pi\x\Id{\Phi})_{*}\con' because, if we show that, then it means that \B_{\Gam_0}(\Theta) is a rich enough set for the right-hand side of the equation to match anything the left-hand side can put out.
First off, \pr{\el{\Gam_0}\x\Phi}((\pi\x\Id{\Gam_1\x\Phi})_{*}\con')=(\pi\x\Id{\Phi})_{*}\con' is pretty trivial to show. The only difference between the two processes is that the \Gam_1 coordinate of \con' is discarded immediately on the right-hand side, and it’s preserved for one step and then discarded on the second step for the left-hand side.
Now for our inequality of interest. Let \con'\in\B_{\Gam_0\x\Gam_1}(\Theta), and we’re trying to show that (\pi\x\Id{\Gam_1\x\Phi})_{*}(\con')\in\B_{\Gam_0}(\Theta) First off, showing the support condition for (\pi\x\Id{\Gam_1\x\Phi})_{*}(\con'), which is somewhat nontrivial this time around. We start off with a guarantee that (y_0,y_1)\in\alp. This happens iff y_0\in\{y'_0\mid(y'_0,y_1)\in\alp\}=\pi(y_0,y_1,\alp)_{2^{\Gam_0}} And so, we get that y_0\in\alp_0 is guaranteed for that pushforward, support condition established.
Endofunction condition time. It’s important to remember that we want to treat \el{\Gam_0} as the computation side of things, and \Gam_1\x\Phi as the environment side of things, for the bridge transform we’re working with. Fix some s:\Gam_0\to\Gam_0 and g:\Gam_0\x\Gam_1\x\Phi\to[0,1]. Begin. (\pi\x\Id{\Gam_1\x\Phi})_{*}\con'(\la y_0 y_1 \alp_0 x.\ind{s(y_0)\in\alp_0}g(s(y_0),y_1,x)) =\con'(\la y_0 y_1 \alp x.\ind{s(y_0)\in\pi(y_0,y_1,\alp)_{2^{\Gam_0}}}g(s(y_0),y_1,x)) Let’s unpack precisely what that set is. =\con'(\la y_0 y_1 \alp x.\ind{s(y_0)\in\{y'_0\mid(y'_0,y_1)\in\alp\}}g(s(y_0),y_1,x)) =\con'(\la y_0 y_1 \alp x.\ind{(s(y_0),y_1)\in\alp}g(s(y_0),y_1,x)) And we can rewrite the endofunction a little bit =\con'(\la y_0 y_1 \alp x.\ind{(s\x\Id{\Gam_1})(y_0,y_1)\in\alp}g((s\x\Id{\Gam_1})(y_0,y_1),x)) And finally apply our endofunction condition, since we’ve now got the function in a form that’s treating y_0,y_1 as part of the computational universe… \le\Theta(\la y_0 y_1 x.g(y_0,y_1,x)) And we’re done, this establishes our desired result. \blacksquare
Proposition 2.17: \B(\Theta) is a continuous function of \Theta.
The way this proof will work is by describing a composition of functions that makes \B(\Theta) from \Theta, and then showing that each of these functions is continuous, if \el{\Gam}\x\Phi is a finite set.
Claim: The bridge transform of some \Theta is equal to (using \ind{\el{\Gam}} to denote restricting an ultradistribution to the event y\in\alp and \ind{\el{\Gam}}^{-1} to denote the inverse of said function, mapping an ultradistribution on \el{\Gam} to the largest ultradistribution that could have produced it via restriction) \ind{\el{\Gam}}\left(\bigcap_{s:\Gam\to\Gam}s^{*}(\ind{\el{\Gam}}^{-1}(\iota_{*}(pr^{*}(\Theta))))\right) Breaking down the unfamiliar notation, the type of pr is \el{\Gam}\x\Phi\to\Gam\x\Phi, just the usual projection. That asterisk up top is pullback along that function. The type of \iota is \el{\Gam}\x\Phi\to\Gam\x 2^{\Gam}\x\Phi, and the lower asterisk on it is pushforward along it. And s^{*} is pullback along the function \Gam\x 2^{\Gam}\x\Phi\to\Gam\x 2^{\Gam}\x\Phi given by (s,\Id{2^{\Gam}},\Id{\Phi}).
Let’s unpack the exact conditions that cause a \con to lie in the set \ind{\el{\Gam}}\left(\bigcap_{s:\Gam\to\Gam}s^{*}(\ind{\el{\Gam}}^{-1}(\iota_{*}(pr^{*}(\Theta))))\right) First off, a \con is in this set iff it is supported over the event y\in\alp, and it lies in the set \bigcap_{s:\Gam\to\Gam}s^{*}(\ind{\el{\Gam}}^{-1}(\iota_{*}(pr^{*}(\Theta)))) Which occurs iff \con is supported over the event y\in\alp, and for all s:\Gam\to\Gam, \con lies in the set s^{*}(\ind{\el{\Gam}}^{-1}(\iota_{*}(pr^{*}(\Theta)))) Which occurs iff \con is supported over the event y\in\alp, and for all s:\Gam\to\Gam, s_{*}(\con) lies in the set \ind{\el{\Gam}}^{-1}(\iota_{*}(pr^{*}(\Theta))) Which occurs iff \con is supported over the event y\in\alp, and for all s:\Gam\to\Gam, \ind{\el{\Gam}}(s_{*}(\con)) lies in the set \iota_{*}(pr^{*}(\Theta))
Now, \iota is just doing a little bit of type conversion, so we’re justified in ignoring it. Anyways, the previous thing occurs iff \con is supported over the event y\in\alp, and for all s:\Gam\to\Gam, pr_{*}(\ind{\el{\Gam}}(s_{*}(\con)))\in\Theta.
Which happens iff \con is supported over the event y\in\alp and for all s:\Gam\to\Gam and g:\Gam\x\Phi\to[0,1], pr_{*}(\ind{\el{\Gam}}(s_{*}(\con)))(\la yx.g(y,x))\le\Theta(\la yx.g(y,x)) However, unpacking the left-hand side, we get pr_{*}(\ind{\el{\Gam}}(s_{*}(\con)))(\la yx.g(y,x)) =\ind{\el{\Gam}}(s_{*}(\con))(\la y\alp x.g(y,x)) =s_{*}(\con)(\la y\alp x.\ind{y\in\alp}g(y,x)) =\con(\la y\alp x.\ind{s(y)\in\alp}g(s(y),x)) Which is the exact condition for \con to lie in the bridge transform. So, we have an equivalence.
Now, since we’ve phrased the bridge transform as \ind{\el{\Gam}}\left(\bigcap_{s:\Gam\to\Gam}s^{*}(\ind{\el{\Gam}}^{-1}(\iota_{*}(pr^{*}(\Theta))))\right) we just need to establish that when all the sets are finite, pullbacks are continuous, pushforwards are continuous, un-restrictions are continuous, intersections are continuous, and restrictions are continuous. Then, this would just be a particularly fancy continuous function, and accordingly, if \Theta_n limits to \Theta, then \B(\Theta_n) limits to \B(\Theta).
Let’s establish that when the sets are finite, pullbacks are continuous. Let g:X\to Y, and Y and X be finite sets, and \psi\in\Box Y. Then, we have g^{*}(\psi)(\la x.f(x)):=\psi(\la y.\max_{x\in g^{-1}(y)}f(x)) With the convention that maximizing over the empty set produces a value of 0. That is an alternate phrasing of pullback. We can then go \lim_{n\to\infty}d(g^{*}(\psi_n),g^{*}(\psi))=\lim_{n\to\infty}\sup_{f:X\to[0,1]}|g^{*}(\psi_n)(f)-g^{*}(\psi)(f)| =\lim_{n\to\infty}\sup_{f:X\to[0,1]}|\psi_n(\la y.\max_{x\in g^{-1}(y)}f(x))-\psi(\la y.\max_{x\in g^{-1}(y)}f(x))| \le\lim_{n\to\infty}\sup_{h:Y\to[0,1]}|\psi_n(h)-\psi(h)|=\lim_{n\to\infty}d(\psi_n,\psi)=0 Admittedly, this isn’t quite what our usual modified KR metric looks like. The reason we can do this is because we’re just dealing with functions in [0,1], so the norm part of the modified KR metric doesn’t matter, and since our sets are finite, we can say that all points are distance 1 from each other, so all functions are 1-Lipschitz, and then the two metrics coincide. So, pullback along any function is continuous.
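To make that finite-set pullback formula concrete, here is a minimal Python sketch (my own illustration, not part of the original proof; all names are hypothetical). A contribution is a dict from points to masses, an ultracontribution is a finite list of such dicts with expectation given by the max over them, and the pullback is computed purely as an expectation functional, which is all the continuity argument needs.

```python
def expect(psi, f):
    # Expectation of f under the ultracontribution psi:
    # the max over its contributions of that contribution's expectation.
    return max(sum(mass * f(y) for y, mass in contrib.items()) for contrib in psi)

def pullback(g, psi, X):
    # g^*(psi) as an expectation functional on f: X -> [0,1], via
    # g^*(psi)(f) = psi(lambda y. max_{x in g^{-1}(y)} f(x)),
    # with the convention that a max over an empty preimage is 0.
    def pulled(f):
        def f_pushed(y):
            preimage = [x for x in X if g(x) == y]
            return max((f(x) for x in preimage), default=0.0)
        return expect(psi, f_pushed)
    return pulled

# Example: X = {0,1,2}, Y = {'a','b'}, g collapses 0 and 1 to 'a' and sends 2 to 'b'.
X = [0, 1, 2]
g = lambda x: 'a' if x < 2 else 'b'
psi = [{'a': 0.5, 'b': 0.5}, {'a': 1.0}]       # two contributions on Y
print(pullback(g, psi, X)(lambda x: x / 2))    # 0.75
```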
For pushforward, it’s easy because, if \psi\in\Box X, then we’ve got \lim_{n\to\infty}d(g_{*}(\psi_n),g_{*}(\psi))=\lim_{n\to\infty}\sup_{h:Y\to[0,1]}|g_{*}(\psi_n)(h)-g_{*}(\psi)(h)| =\lim_{n\to\infty}\sup_{h:Y\to[0,1]}|\psi_n(\la x.h(g(x)))-\psi(\la x.h(g(x)))| \le\lim_{n\to\infty}\sup_{f:X\to[0,1]}|\psi_n(f)-\psi(f)|=\lim_{n\to\infty}d(\psi_n,\psi)=0 For showing restrictions continuous, for the set E\subseteq X that we’re updating on, \lim_{n\to\infty}d(\ind{E}(\psi_n),\ind{E}(\psi))=\lim_{n\to\infty}\sup_{f:X\to[0,1]}|\ind{E}(\psi_n)(f)-\ind{E}(\psi)(f)| =\lim_{n\to\infty}\sup_{f:X\to[0,1]}|\psi_n(\la x.\ind{x\in E}f(x))-\psi(\la x.\ind{x\in E}f(x))| \le\lim_{n\to\infty}\sup_{f:X\to[0,1]}|\psi_n(f)-\psi(f)|=\lim_{n\to\infty}d(\psi_n,\psi)=0 For intersections… that will take a bit more work. We’ll have to use the equivalent formulation of closeness, that \psi_n limits to \psi iff the Hausdorff distance between the corresponding sets (according to the generalized KR metric) limits to 0. So, our task is to assume that \psi_n limits to \psi, and \phi_n limits to \phi, and show that \psi_n\cap\phi_n limits to \psi\cap\phi. The bound we’ll manage to prove is that d(\psi_n\cap\phi_n,\psi\cap\phi)\le|X|\max(d(\psi_n,\psi),d(\phi_n,\phi)) Where |X| is the number of elements in the finite set X. Here’s the basic argument. For any particular point in the set \psi_n, there’s a nearby point in \psi (since the Hausdorff distance is low) with only \eps measure moved around or deleted. So, in particular, if all the measure moved or deleted was just deleted from the contribution in \psi_n instead, then that resulting contribution would be below the nearby contribution in \psi that we picked, and so it would lie in \psi as well due to downwards closure.
So, in particular, if \psi_n and \psi only have a Hausdorff distance of \eps, then, taking any contribution in \psi_n and subtracting \eps measure from \emph{all points} (if possible, if not, just remove measure till you’re at 0) is \emph{guaranteed} to make a point in \psi, and vice-versa.
And a corollary of that is that, given any contribution in \psi_n\cap\phi_n, the "subtract \max(d(\psi_n,\psi),d(\phi_n,\phi)) measure from each point" contribution is in \psi, also in \phi, and at a maximum distance of |X|\max(d(\psi_n,\psi),d(\phi_n,\phi)) from the original contribution. And this argument can be reversed to show that the limit of the intersections is the intersection of the limits (because the Hausdorff distance between the two goes to 0), so we do in fact have intersection being continuous.
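Here is a small Python sketch of the two facts this argument leans on (my own illustration with made-up numbers, not the post’s code): the distance \sup_{f:X\to[0,1]}|m_1(f)-m_2(f)| between two contributions on a finite set is attained by an indicator function, and subtracting \eps from every point’s mass produces something pointwise below the original and at most \eps\cdot|X| away from it.

```python
def dist(m1, m2, X):
    # sup over f: X -> [0,1] of |m1(f) - m2(f)|; attained by the indicator of
    # the set where m1 exceeds m2, or of the set where m2 exceeds m1.
    diff = [m1.get(x, 0.0) - m2.get(x, 0.0) for x in X]
    return max(sum(d for d in diff if d > 0), sum(-d for d in diff if d < 0))

def shrink(m, eps):
    # "Subtract eps measure from every point" (dropping a point entirely if it
    # holds less than eps); the result is pointwise below m.
    return {x: v - eps for x, v in m.items() if v > eps}

X = [0, 1, 2]
m = {0: 0.3, 1: 0.5, 2: 0.05}
print(dist(m, shrink(m, 0.1), X))   # at most 0.1 * |X| = 0.3; here it is 0.25
```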
And that just leaves un-restricting. Again, this will take a Hausdorff-distance argument. Fixing some contribution in \ind{E}^{-1}(\psi_n), it can be broken down into an on-E part \con_{n,E}, and an off-E part \con_{n,\neg E}. When you restrict to E, then \con_{n,E}\in\psi_n. Since \psi_n is within \eps of \psi, there’s some \con_{E}\in\psi that’s within \eps of \con_{n,E}. Then, let your point in \ind{E}^{-1}(\psi) be \con_{E}+\con_{n,\neg E} (if there’s slightly more than 1 measure there, delete \eps measure from \con_{n,\neg E}, or all the measure if there’s less than \eps present). It’s close to \con_{n,E}+\con_{n,\neg E} because \con_{n,E} is close to \con_E, the other component of it is unchanged, and maybe we deleted a little bit of excess measure which didn’t do much.
This line of argument shows that \psi_n being close to the limit \psi is sufficient to establish that the un-restriction of the two of them are comparably close together. So we have continuity for that, which is the last thing we needed.
Since we wrote the bridge transform as a composition of continuous functions, we know it’s continuous (as long as all the involved sets are finite). \blacksquare
Proposition 3.1: Let X be a finite poset, f:X\rightarrow\R and \Theta\in\HUC X downward closed. Define f^{\max}:X\rightarrow\R by f^{\max}(x):=\max_{y\leq x}f(y). Observe that f^{\max} is always non-decreasing. Then, \Theta(f)=\Theta(f^{\max}).
Proof: Pick a \con'\in\Theta s.t. \con'(f^{\max})=\Theta(f^{\max}). Ie, a maximizing contribution. Let k:X\to X be defined as k:=\la x.\text{argmax}_{y\le x}f(y). Ie, it moves a point down to somewhere below it where it can attain the highest value according to f. Now, consider k_{*}(\con'). It’s present in \Theta because \Theta was, by assumption, downwards closed, and we just moved all the measure down.
Now, we have \Theta(f)=\max_{\con\in\Theta}\con(f)\ge k_{*}(\con')(f)=\con'(\la x.f(k(x)))=\con'(\la x.f(\text{argmax}_{y\le x}f(y))) =\con'(\la x.\max_{y\le x}f(y))=\con'(f^{\max})=\Theta(f^{\max})\ge\Theta(f) And so, all inequalities must be equalities, proving that \Theta(f)=\Theta(f^{\max}). In order, the steps were: unpacking definitions, using downward closure to conclude that k_{*}(\con')\in\Theta, unpacking pushforwards, unpacking the definition of k, using that applying a function to the argmax of inputs to that function just makes the max of the function, folding the definition of f^{\max} back up, using that \con' was selected to maximize f^{\max}, and applying monotonicity. Done! \blacksquare
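Here is a minimal Python sketch of the argument (my own illustration, using a hypothetical three-element chain as the poset): compute f^{\max}, the argmax map k, and check that the pushforward k_{*}(\con') gives f the same expectation that \con' gives f^{\max}.

```python
# The chain 0 <= 1 <= 2, with f given as a table.
leq = {(x, y) for x in range(3) for y in range(3) if x <= y}
f = {0: 0.9, 1: 0.2, 2: 0.5}

def f_max(x):
    # f_max(x) = max over y <= x of f(y); non-decreasing in the poset order.
    return max(f[y] for y in f if (y, x) in leq)

def k(x):
    # k moves x down to a point below it where f is largest, so f(k(x)) = f_max(x).
    return max((y for y in f if (y, x) in leq), key=lambda y: f[y])

contrib = {0: 0.2, 1: 0.3, 2: 0.5}      # some contribution on the poset
pushforward = {}                        # k_*(contrib): each point's mass moves to k of it
for x, mass in contrib.items():
    pushforward[k(x)] = pushforward.get(k(x), 0.0) + mass

exp_fmax = sum(mass * f_max(x) for x, mass in contrib.items())
exp_push = sum(mass * f[x] for x, mass in pushforward.items())
assert abs(exp_fmax - exp_push) < 1e-9  # k_*(contrib)(f) = contrib(f_max)
```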
Proposition 4.1: Consider some \Gamma, \Phi, a relation Q\subseteq\Gamma\times\Phi and a PUCK \Xi over Q. Let \Theta:=\top_{\Gamma}\ltimes\Xi. Then, \B(\Theta)=[\top_{\Gamma}\ltimes(\Sus{\Theta}\rtimes\Xi)]^{\downarrow}=[\top_{\Gamma}\ltimes(Q^{-1}\rtimes\Xi)]^{\downarrow}
First off, I’m not terribly picky about variable ordering, so I’ll just write our desired proof target as \B(\Theta)=[\top_{\Gamma}\sdp\Xi\sdp\Sus{\Theta}]^{\downarrow}=[\top_{\Gamma}\sdp\Xi\sdp Q^{-1}]^{\downarrow} The way we’ll do this is by establishing the following result. For all monotone f':\el{\Gam}\x\Phi\to[0,1], we have \B(\Theta)(f')\le(\top_{\Gamma}\sdp\Xi\sdp\Sus{\Theta})(f')\le(\top_{\Gamma}\sdp\Xi\sdp Q^{-1})(f')\le\B(\Theta)(f') Why does that suffice? Well, assume hypothetically that the result held. Since the inequalities go in a circle, we have equality for all monotone functions. And then, for some non-monotone function f, we can go \B(\Theta)(f)=\B(\Theta)(f^{\max})=(\top_{\Gamma}\sdp\Xi\sdp\Sus{\Theta})(f^{\max}) =[\top_{\Gamma}\sdp\Xi\sdp\Sus{\Theta}]^{\downarrow}(f^{\max})=[\top_{\Gamma}\sdp\Xi\sdp\Sus{\Theta}]^{\downarrow}(f) and swap out \Sus{\Theta} for Q^{-1} to show the other equality, and then we’d have equality of the three ultradistributions on all functions, so they’re equal.
For the equalities in the above equation, the first one arose because of Proposition 2.4 (bridge transforms are always downwards closed) and Proposition 3.1 (downwards-closed things let you swap out f for f^{\max} and it doesn’t affect the value). The second equality arose because f^{\max} is a monotone function and by assumption, we have equality for monotone functions. The third equality would arise because taking the downwards closure doesn’t affect the expectation value of monotone functions. If you add a bunch of contributions made by measure flowing down, that’s just strictly worse from the perspective of a monotone function and doesn’t change expectation value. And the fourth equality arises from Proposition 3.1 again.
So, we just need to prove the following three inequalities, for monotone functions f. \B(\Theta)(f)\le(\top_{\Gam}\sdp\Xi\sdp\Sus{\Theta})(f)\le(\top_{\Gam}\sdp\Xi\sdp Q^{-1})(f)\le\B(\Theta)(f) The first one is easily addressable by Proposition 2.7. By Proposition 2.7 and the definition of \Theta, we have \B(\Theta)\subseteq(\Theta\sdp\Sus{\Theta})^{\downarrow}=[\top_{\Gam}\sdp\Xi\sdp\Sus{\Theta}]^{\downarrow} And so, for monotone functions f, we have \B(\Theta)(f)\le(\top_{\Gam}\sdp\Xi\sdp\Sus{\Theta})(f) Done.
Now to show our second inequality. (\top_{\Gam}\sdp\Xi\sdp\Sus{\Theta})(\la y\alp x.f(y,x,\alp)) =(\top_{\Gam}\sdp\Xi)(\la yx.\delta_{\Sus{\Theta}(x)}(\la\alp.f(y,x,\alp))) =(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,\Sus{\Theta}(x))) Unpack the definition of the set =(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,\{y'\mid(y',x)\in\Sup\Theta\})) Unpack the definition of \Theta =(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,\{y'\mid(y',x)\in\Sup(\top_{\Gam}\sdp\Xi)\})) The condition (y',x)\in\Sup(\top_{\Gam}\sdp\Xi) is equivalent to x\in\Sup\Xi(y'). After all, if x\in\Sup\Xi(y'), the distribution \delta_{y'} lies in \top_{\Gam}, so \delta_{y'}\sdp\Xi would certify that (y',x)\in\Sup(\top_{\Gam}\sdp\Xi). And if x\not\in\Sup\Xi(y'), then no matter the distribution in \top_{\Gam} or kernel selected from \Xi, if y' gets picked, then the kernel selected from \Xi isn’t going to be making x along with it. Since we have that iff characterization, we have =(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,\{y'\mid x\in\Sup\Xi(y')\})) \Xi(y') is the union of a bunch of k(y') for k\in\Pi (and convex hull), so its support is equal to the union of the supports for the k(y'). =(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,\{y'\mid x\in\bigcup_{k\in\Pi}\Sup k(y')\})) Then, since each k is a PoCK over Q, k(y') is the restriction of some measure \varpi^k to the set Q(y'), which will be written as \ind{Q(y')}\varpi^k. =(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,\{y'\mid x\in\bigcup_{k\in\Pi}\Sup(\ind{Q(y')}\varpi^k)\})) And now we’re about to get an inequality. f is monotone, so making the associated set bigger (easier to fulfill the defining condition) should always increase the value of f, and by monotonicity, increase the expectation value, so we get \le(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,\{y'\mid x\in Q(y')\})) Then restate =(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,\{y'\mid(y',x)\in Q\})) =(\top_{\Gam}\sdp\Xi)(\la yx.f(y,x,Q^{-1}(x))) And pack back up as a semidirect product. =(\top_{\Gam}\sdp\Xi)(\la yx.\delta_{Q^{-1}(x)}(\la\alp.f(y,x,\alp))) =(\top_{\Gam}\sdp\Xi\sdp Q^{-1})(\la y\alp x.f(y,x,\alp)) And we have our second \le inequality established!
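Since terms like (\top_{\Gam}\sdp\Xi\sdp Q^{-1})(f) keep showing up, here is a minimal finite-set Python sketch of how such a semidirect product cashes out (my own illustration, with a hypothetical relation and a single-kernel PUCK): \top_{\Gam} becomes a max over y, the PUCK becomes a max over its kernels, and Q^{-1} just substitutes a set into f deterministically.

```python
Gamma = ['y0', 'y1']
Phi = ['x0', 'x1']
Q = {('y0', 'x0'), ('y0', 'x1'), ('y1', 'x1')}     # hypothetical relation on Gamma x Phi

def Q_inv(x):
    return frozenset(y for y in Gamma if (y, x) in Q)

# A PUCK over Q generated by one kernel: k(y) is a fixed measure restricted to Q(y).
varpi = {'x0': 0.4, 'x1': 0.6}
def k(y):
    return {x: m for x, m in varpi.items() if (y, x) in Q}
Pi = [k]

def top_Xi_Qinv(f):
    # (top_Gamma semidirect Xi semidirect Q^{-1})(f): max over y of max over
    # kernels of  sum_x k(y)(x) * f(y, x, Q^{-1}(x))
    return max(
        max(sum(m * f(y, x, Q_inv(x)) for x, m in kern(y).items()) for kern in Pi)
        for y in Gamma
    )

# A monotone f that rewards alpha containing the chosen y.
print(top_Xi_Qinv(lambda y, x, alpha: 1.0 if y in alpha else 0.0))   # 1.0
```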
Now, onto the third inequality. (\top_{\Gam}\sdp\Xi\sdp Q^{-1})(\la y\alp x.f(y,x,\alp)) Unpack the semidirect products =\top_{\Gam}(\la y.\Xi(y)(\la x.\delta_{Q^{-1}(x)}(\la\alp.f(y,x,\alp)))) And what top means =\max_{y\in\Gam}\Xi(y)(\la x.\delta_{Q^{-1}(x)}(\la\alp.f(y,x,\alp))) And as for \Xi… well, each \Xi(y) is the convex hull of the various k(y), for k\in\Pi. So, the expectation for \Xi(y) is the maximum expectation for the various k(y), so we can rewrite as =\max_{y\in\Gam}\max_{k\in\Pi}k(y)(\la x.\delta_{Q^{-1}(x)}(\la\alp.f(y,x,\alp))) Pick a particular y^{*} and k^{*} that attain the maximal value =k^{*}(y^{*})(\la x.\delta_{Q^{-1}(x)}(\la\alp.f(y^{*},x,\alp))) Reexpress a little bit =\delta_{y^{*}}(\la y.k^{*}(y)(\la x.\delta_{Q^{-1}(x)}(\la\alp.f(y,x,\alp)))) And pack this back up as a semidirect product =(\delta_{y^{*}}\sdp k^{*}\sdp Q^{-1})(\la y\alp x.f(y,x,\alp)) And then we’ll be showing that this contribution lies in \B(\Theta). Once we’ve done that, we can go \le\max_{\con'\in\B(\Theta)}\con'(\la y\alp x.f(y,x,\alp)) =\B(\Theta)(\la y\alp x.f(y,x,\alp)) And we’d be done, having proven the third inequality and the last one to finish up the proof. So, now our proof target switches to showing that (\delta_{y^{*}}\sdp k^{*}\sdp Q^{-1})\in\B(\Theta). We can show this if we show the support condition and the endofunction condition.
For the support condition, we have (\delta_{y^{*}}\sdp k^{*}\sdp Q^{-1})(\la y\alp x.\ind{y\not\in\alp}) =\delta_{y^{*}}(\la y.k^{*}(y)(\la x.\delta_{Q^{-1}(x)}(\la\alp.\ind{y\not\in\alp}))) =\delta_{y^{*}}(\la y.k^{*}(y)(\la x.\ind{y\not\in Q^{-1}(x)})) =k^{*}(y^{*})(\la x.\ind{y^{*}\not\in Q^{-1}(x)}) And then we use that the k^{*}(y^{*}) are all of the form "take this measure, restrict it to Q(y^{*})", to get =(\ind{Q(y^{*})}\varpi^{k^{*}})(\la x.\ind{y^{*}\not\in Q^{-1}(x)}) =\varpi^{k^{*}}(\la x.\ind{x\in Q(y^{*})}\ind{y^{*}\not\in Q^{-1}(x)}) Unpacking the definitions, we get =\varpi^{k^{*}}(\la x.\ind{(y^{*},x)\in Q}\ind{(y^{*},x)\not\in Q})=0 And so, this contribution is indeed supported on (y,\alp) pairs s.t. y\in\alp.
Now for the endofunction condition. As usual, fix an s and a g. (\delta_{y^{*}}\sdp k^{*}\sdp Q^{-1})(\la y\alp x.\ind{s(y)\in\alp}g(s(y),x)) Unpack the semidirect product =\delta_{y^{*}}(\la y.k^{*}(y)(\la x.\delta_{Q^{-1}(x)}(\la\alp.\ind{s(y)\in\alp}g(s(y),x)))) Plug in the dirac-deltas =k^{*}(y^{*})(\la x.\ind{s(y^{*})\in Q^{-1}(x)}g(s(y^{*}),x)) Reexpress the set membership criterion a bit =k^{*}(y^{*})(\la x.\ind{x\in Q(s(y^{*}))}g(s(y^{*}),x)) And the contribution at the start =(\ind{Q(y^{*})}\varpi^{k^{*}})(\la x.\ind{x\in Q(s(y^{*}))}g(s(y^{*}),x)) Distribute it in as an indicator function. =\varpi^{k^{*}}(\la x.\ind{x\in Q(y^{*})}\ind{x\in Q(s(y^{*}))}g(s(y^{*}),x)) Pull the other indicator function out. =(\ind{Q(s(y^{*}))}\varpi^{k^{*}})(\la x.\ind{x\in Q(y^{*})}g(s(y^{*}),x)) Rewrite with k^{*} =k^{*}(s(y^{*}))(\la x.\ind{x\in Q(y^{*})}g(s(y^{*}),x)) Use an inequality to get rid of the indicator function \le k^{*}(s(y^{*}))(\la x.g(s(y^{*}),x)) Rewrite it a bit =\delta_{s(y^{*})}(\la y.k^{*}(y)(\la x.g(y,x))) Swap out k^{*}(y) for \Xi(y), the latter is larger \le\delta_{s(y^{*})}(\la y.\Xi(y)(\la x.g(y,x))) Swap out \delta_{s(y^{*})} for \top_{\Gam}, the latter is larger \le\top_{\Gam}(\la y.\Xi(y)(\la x.g(y,x))) =(\top_{\Gam}\sdp\Xi)(\la yx.g(y,x)) Abbreviate =\Theta(\la yx.g(y,x)) And bam, endofunction condition is shown, the entire proof goes through now. \blacksquare
Corollary 4.3: Suppose that for any d\in\D and \pi:\Hi\rightarrow\A s.t. d\in\Sup W(\pi), it holds that d\C\pi. That is, the observations W predicts to receive from the computer are consistent with the chosen policy. Let L:\D\rightarrow\R be a Cartesian loss function and \pi:\Hi\rightarrow\A a policy. Then, (\pr{\el{\Gamma}}\B(\Theta_{W})\cap\Co{\pi}{\text{fair}})(\Ph)=W(\pi;L)
I’m going to be proceeding very cautiously here. First off, make our \pi value visually distinct by swapping it out for \pi^{*} (\pr{\el{\Gamma}}\B(\Theta_{W})\cap\Co{\pi^{*}}{\text{fair}})(\Ph) Now, by the identifications we made earlier, we can identify \Gam with \A^{\Hi}, the space of policies. Using that to unpack the function a little bit, we have =(\pr{\el{\Gamma}}\B(\Theta_{W})\cap\Co{\pi^{*}}{\text{fair}})(\la\pi\alp.\Ph(\pi,\alp)) Now, we note that intersecting with top of a particular set is equivalent to updating on the indicator function for that set. Using definition 1.5 to unpack \Co{\pi^{*}}{\text{fair}}, we get =(\pr{\el{\Gamma}}\B(\Theta_{W}))(\la\pi\alp.\ind{\forall h\in\Hi_{\pi,\alp}:G^{\pi}(h)=\pi^{*}(h)}\Ph(\pi,\alp)) Apply that G^{\pi}(h) is "what would the agent do on h if the agent is copying the behavior of \pi", so we can rephrase as: =(\pr{\el{\Gamma}}\B(\Theta_{W}))(\la\pi\alp.\ind{\forall h\in\Hi_{\pi,\alp}:\pi(h)=\pi^{*}(h)}\Ph(\pi,\alp)) Pull off the projection, and use d for a destiny in \D. =\B(\Theta_{W})(\la\pi\alp d.\ind{\forall h\in\Hi_{\pi,\alp}:\pi(h)=\pi^{*}(h)}\Ph(\pi,\alp)) At this point, we use that \Theta_{W}:=\top_{\Gam}\sdp W, that W is a PUCK over Q_0, and Proposition 4.1 to go =[\top_{\Gamma}\sdp W\sdp Q_0^{-1}]^{\downarrow}(\la\pi\alp d.\ind{\forall h\in\Hi_{\pi,\alp}:\pi(h)=\pi^{*}(h)}\Ph(\pi,\alp)) Before we can remove the downwards closure, we’ll want to verify the function is monotone. So, we’ll want to start unpacking the physicalist loss next. Applying definition 3.1, and using d' instead of g to remember it’s a destiny, we have =[\top_{\Gamma}\sdp W\sdp Q_0^{-1}]^{\downarrow}(\la\pi\alp d.\ind{\forall h\in\Hi_{\pi,\alp}:\pi(h)=\pi^{*}(h)}\min_{ha:ha\in\X{\pi,\alp}}\max_{d':ha\sqsubseteq d'}L(d')) Next up is unpacking \X{\pi,\alp}. Using definition 3.1, it’s =[\top_{\Gamma}\sdp W\sdp Q_0^{-1}]^{\downarrow}(\la\pi\alp d.\ind{\forall h\in\Hi_{\pi,\alp}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,\alp}\x\A\wedge(\forall\pi'\in\alp:G^{\pi'}(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) At this point, we can, again, treat G^{\pi'}(h) the same as \pi'(h). =[\top_{\Gamma}\sdp W\sdp Q_0^{-1}]^{\downarrow}(\la\pi\alp d.\ind{\forall h\in\Hi_{\pi,\alp}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,\alp}\x\A\wedge(\forall\pi'\in\alp:\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) And now we need to take a moment to show that \Hi_{\pi,\alp} gets smaller when \alp gets larger. Applying definition 1.5, the event h\in\Hi_{\pi,\alp} unpacks as (\forall h'a'\sqsubset h,\pi'\in\alp:G^{\pi'}(h')=a')\wedge(\exists d':h\sqsubset d'\wedge d'\C\pi) Now, if \alp becomes a larger set, then it gets harder for the first condition to be fulfilled, so the set \Hi_{\pi,\alp} shrinks. Now, since this happens, it means that if \alp gets bigger, it gets more difficult for the prerequisite of the implication in the indicator function to be fulfilled, so the implication is more likely to hold. Further, the minimization is taking place over a smaller set, so the loss goes up. So our function is monotone in \alp, and we can remove the downwards closure.
=(\top_{\Gamma}\sdp W\sdp Q_0^{-1})(\la\pi\alp d.\ind{\forall h\in\Hi_{\pi,\alp}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,\alp}\x\A\wedge(\forall\pi'\in\alp:\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) Unpacking the semidirect product, it is =\top_{\Gamma}(\la\pi.W(\pi)(\la d.\delta_{Q_{0}^{-1}(d)}(\la\alp.\ind{\forall h\in\Hi_{\pi,\alp}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,\alp}\x\A\wedge(\forall\pi'\in\alp:\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')))) Substituting in the dirac-delta everywhere that \alp is, we get =\top_{\Gamma}(\la\pi.W(\pi)(\la d.\ind{\forall h\in\Hi_{\pi,Q_{0}^{-1}(d)}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,Q_{0}^{-1}(d)}\x\A\wedge(\forall\pi'\in Q_{0}^{-1}(d):\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d'))) Now, Q_0^{-1}(d) is the set of policies \pi' s.t. \pi'Q_0 d. The "this policy is consistent with this destiny" relation. Also let’s swap out \top_{\Gamma} for maximization =\max_{\pi}W(\pi)(\la d.\ind{\forall h\in\Hi_{\pi,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,Q_0^{-1}(d)}\x\A\wedge(\forall\pi'Q_0 d:\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) Now, we’re going to try to address that minimum, and show that the only ha that fulfill the conditions are exactly those ha\sqsubseteq d. This requires showing that ha\sqsubseteq d is a sufficient condition to fulfill the relevant properties, and then to show that ha\not\sqsubseteq d implies a failure of one of the properties.
So, first up. Assume ha\sqsubseteq d. Then, for any \pi', dQ_0\pi' and ha\sqsubseteq d \emph{must} imply that \pi'(h)=a, that’s what policy consistency means. Also, h\in\Hi_{\pi,Q_0^{-1}(d)} unpacks as the two conditions \forall h'a',\pi':h'a'\sqsubset h\wedge dQ_0\pi'\to\pi'(h')=a' \exists d':h\sqsubset d'\wedge d'\C\pi As for the first condition, clearly, if \pi' is consistent with d, it’s consistent with ha because ha\sqsubseteq d, and so it must be consistent with any prefix of ha, so the first condition holds.
For the second condition, d is a valid choice, because we assumed ha\sqsubseteq d, and d\C\pi occurs always, because W(\pi) always being supported on d s.t. d\C\pi was one of our problem assumptions.
So, we have one implication direction down. Now for the reverse implication direction. Assume ha\not\sqsubseteq d. Then there are two possibilities. The first possibility is that ha first diverges from d on an observation. The second possibility is that ha first diverges from d on an action.
For the first possibility, it’s possible to make two policies which are consistent with d but also differ in their actions on history h, because h isn’t a prefix of d if ha first differs from d on an observation.
For the second possibility, it’s ruled out by either the condition for h\in\Hi_{\pi,Q_0^{-1}(d)} that goes \forall h'a',\pi':h'a'\sqsubset h\wedge\pi'Q_0 d\to\pi'(h')=a' or the extra condition that \forall\pi':\pi'Q_0 d\to\pi'(h)=a applied to the first a-history prefix that deviates from d, because \pi' Q_0 d implies that \pi'(h') must be the action which d dictates, not the action a' that deviates from d.
And that establishes the other direction of the iff statement.
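Since everything here turns on the consistency relation Q_0 between policies and destinies, here is a minimal Python sketch of it (my own illustration, with a toy encoding of histories as alternating observation/action tuples; not the post’s formalism): dQ_0\pi iff \pi takes d’s action after every history prefix of d.

```python
def consistent(pi, d):
    # d Q_0 pi: d is an alternating tuple (o0, a0, o1, a1, ...); a history h is a
    # prefix ending in an observation, and "ha is a prefix of d" means d continues
    # h with action a.
    return all(pi[d[:i]] == d[i] for i in range(1, len(d), 2))

d = ('o0', 'a0', 'o1', 'a1')
pi_star  = {('o0',): 'a0', ('o0', 'a0', 'o1'): 'a1'}    # takes d's actions everywhere
pi_other = {('o0',): 'a0', ('o0', 'a0', 'o1'): 'a2'}    # deviates at the second action
print(consistent(pi_star, d), consistent(pi_other, d))  # True False
```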
Thus, we can swap out our fancy minimization with just minimizing over the ha\sqsubseteq d. =\max_{\pi}W(\pi)(\la d.\ind{\forall h\in\Hi_{\pi,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\sqsubseteq d}\max_{d':ha\sqsubseteq d'}L(d')) This minimization is attained by selecting d itself. So then it turns into =\max_{\pi}W(\pi)(\la d.\ind{\forall h\in\Hi_{\pi,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d)) At this point, what we’ll do is show that an upper bound and lower bound on the value of this term is the same. Going from upper bound to lower bound, it’s starting out with W(\pi^{*})(\la d.L(d)) At this point, we’ll use that W is a PUCK, so there’s a set E of environments e (PoCK’s) that W is generated from, so we can go: =\max_{e\in E}e(\pi^{*})(\la d.L(d)) =\max_{\pi}\max_{e\in E}e(\pi^{*})(\la d.\ind{dQ_0\pi}L(d)) =\max_{\pi}\max_{e\in E}(\ind{Q_0(\pi^{*})}\varpi^{e})(\la d.\ind{dQ_0\pi}L(d)) =\max_{\pi}\max_{e\in E}\varpi^{e}(\la d.\ind{dQ_0\pi^{*}}\ind{dQ_0\pi}L(d)) Now pull the indicator function back out. =\max_{\pi}\max_{e\in E}(\ind{Q_0(\pi)}\varpi^{e})(\la d.\ind{dQ_0\pi^{*}}L(d)) =\max_{\pi}\max_{e\in E}e(\pi)(\la d.\ind{dQ_0\pi^{*}}L(d)) =\max_{\pi}W(\pi)(\la d.\ind{dQ_0\pi^{*}}L(d)) Now we must show that this is a looser constraint than what was previously in our indicator function to proceed further. So our next order of business is showing that, certainly, (\forall h\in\Hi_{\pi,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h))\to dQ_0\pi^{*} Let h be one of the history prefixes of some d in the support of W(\pi). The two conditions for h\in\Hi_{\pi,Q_0^{-1}(d)} are fulfilled, because they are \forall h',a',\pi':h'a'\sqsubset h\wedge dQ_0\pi'\to\pi'(h')=a' \exists d':h\sqsubset d'\wedge d'\C\pi For the first condition, if h'a'\sqsubset h, then h'a'\sqsubset d, and so if \pi' is consistent with d, it must take the same action in response to h', the action that d commands, a'. So that’s fulfilled. For the second condition, let d' be d. h\sqsubset d holds, and so d\C\pi holds certainly, because W(\pi) is supported on d s.t. d\C\pi.
So, for all d in the support of W(\pi), h\sqsubset d\to h\in\Hi_{\pi,Q_0^{-1}(d)}. Since we assumed our forall statement as prerequisite, this means that for all h\sqsubset d, \pi(h)=\pi^{*}(h). And dQ_0\pi means \forall ha\sqsubseteq d:\pi(h)=a. Since \pi^{*}(h) mimics \pi(h) for all history prefixes of d, this means \forall ha\sqsubseteq d:\pi^{*}(h)=a, ie dQ_0\pi^{*}.
So, since this is a looser constraint, when we were previously at =\max_{\pi}W(\pi)(\la d.\ind{dQ_0\pi^{*}}L(d)) we can proceed further to \ge\max_{\pi}W(\pi)(\la d.\ind{\forall h\in\Hi_{\pi,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d)) Which is the value we’re trying to sandwich. Now, at this point, plug in \pi^{*} and get \ge W(\pi^{*})(\la d.\ind{\forall h\in\Hi_{\pi^{*},Q_0^{-1}(d)}:\pi^{*}(h)=\pi^{*}(h)}L(d)) =W(\pi^{*})(\la d.L(d)) And bam, we’ve sandwiched our term between W(\pi^{*})(L) on both sides, and so the result follows. \blacksquare
Proposition 4.2: Let X, Y and Z be finite sets, Q\subseteq Y\times X a relation, \kappa:Y\rightarrow\Con (X\times Z) a Z-PoCK over Q and \Theta\in\HUC Y. Then, there exist \mu\in\Con Z and \phi:Z\times Y\rightarrow\Con X s.t. for all z, \la y.\phi(z,y) is a PoCK over Q s.t. \kappa(y)=\mu\ltimes(\la z.\phi(z,y)). Moreover, suppose that (\mu_{1},\phi_{1}) and (\mu_{2},\phi_{2}) are both as above. Then, \mu_{1}\ltimes\Theta\ltimes\phi_{1}=\mu_{2}\ltimes\Theta\ltimes\phi_{2}
Our first order of business is establishing that there’s even a \mu and \phi that has those effects at all. Here’s a way to define them. \mu(z):=\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x') Where \varpi^{\kappa} is the measure on Z\x X that \kappa is associated with, ie, \kappa(y)=\ind{Q(y)}\varpi^{\kappa} must be true for some \varpi^{\kappa} because \kappa is a Z-PoCK over Q. And, \phi will be defined as: \phi(y,z)(x):=\frac{\ind{x\in Q(y)}\varpi^{\kappa}(z,x)}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')} With those definitions in place, it’s easy to establish that \mu\sdp(\la z.\phi(y,z))=\kappa(y). We can just fix an arbitrary x,z pair and go \kappa(y)(x,z)=\ind{x\in Q(y)}\varpi^{\kappa}(z,x)=\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')\cdot\frac{\ind{x\in Q(y)}\varpi^{\kappa}(z,x)}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')} =\mu(z)\cdot\phi(y,z)(x)=(\mu\sdp(\la z.\phi(y,z)))(x,z) And we’re done with showing that such functions exist in the first place. Well, as long as we check that \mu and \phi behave accordingly. First off, \mu being a contribution follows from \varpi^{\kappa} being a Z-polycontribution, and the definition of Z-polycontributions. Also, to show that (\la y.\phi(y,z)) is a PoCK over Q, we need to show that there’s a \varpi^{\phi,z} s.t. \phi(y,z)=\ind{Q(y)}\varpi^{\phi,z}, and that always has 1 or less measure.
In order to do this, define \varpi^{\phi,z}:=\frac{1}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')}pr_{X}(\ind{\{z\}\x X}\varpi^{\kappa}) Clearly, you get \phi(y,z) from restricting this to Q(y), because we have (\ind{Q(y)}\varpi^{\phi,z})(x)=\frac{1}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')}\ind{Q(y)}(pr_{X}(\ind{\{z\}\x X}\varpi^{\kappa}))(x) =\frac{\ind{Q(y)}(pr_{X}(\ind{\{z\}\x X}\varpi^{\kappa}))(x)}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')}=\frac{\ind{x\in Q(y)}pr_{X}(\ind{\{z\}\x X}\varpi^{\kappa})(x)}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')} =\frac{\ind{x\in Q(y)}\sum_{z'}(\ind{\{z\}\x X}\varpi^{\kappa})(x,z')}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')}=\frac{\ind{x\in Q(y)}\sum_{z'}\ind{z'=z}\varpi^{\kappa}(x,z')}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')} =\frac{\ind{x\in Q(y)}\varpi^{\kappa}(x,z)}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')}=\phi(y,z)(x) And we’re done. And also, the measure is \le 1, because \sum_{x\in Q(y)}\varpi^{\phi,z}(x)=\sum_{x\in Q(y)}\frac{pr_{X}(\ind{\{z\}\x X}\varpi^{\kappa})(x)}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x')} and, skipping over a few routine steps, =\frac{\sum_{x\in Q(y)}\varpi^{\kappa}(x,z)}{\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(x',z)} \le\frac{\sum_{x\in Q(y)}\varpi^{\kappa}(x,z)}{\sum_{x'\in Q(y)}\varpi^{\kappa}(x',z)}=1 And we’re done, we figured out how to decompose \kappa into \mu and \phi.
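Here is a minimal Python sketch of the decomposition we just built (my own illustration, with hypothetical finite sets and a made-up \varpi^{\kappa}): compute \mu(z) and \phi(y,z) from \varpi^{\kappa} and Q, and check that \mu(z)\cdot\phi(y,z)(x) recovers \ind{x\in Q(y)}\varpi^{\kappa}(z,x), ie \kappa(y), and that each \phi(y,z) has total mass at most 1.

```python
Y, X, Z = ['y0', 'y1'], ['x0', 'x1'], ['z0', 'z1']
Q = {('y0', 'x0'), ('y1', 'x0'), ('y1', 'x1')}     # hypothetical relation
varpi = {('z0', 'x0'): 0.3, ('z0', 'x1'): 0.2,     # hypothetical Z-polycontribution
         ('z1', 'x0'): 0.4, ('z1', 'x1'): 0.1}

def Qset(y):
    return {x for x in X if (y, x) in Q}

def mu(z):
    # mu(z) = max over y of the varpi-mass that Q(y) captures in the z-slice.
    return max(sum(varpi[(z, x)] for x in Qset(y)) for y in Y)

def phi(y, z):
    # phi(y, z)(x) = 1{x in Q(y)} * varpi[z, x] / mu(z)
    return {x: varpi[(z, x)] / mu(z) for x in Qset(y)}

for y in Y:
    for z in Z:
        assert sum(phi(y, z).values()) <= 1 + 1e-9                       # a contribution
        for x in Qset(y):
            assert abs(mu(z) * phi(y, z)[x] - varpi[(z, x)]) < 1e-9      # recovers kappa(y)
```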
Now for the second half of the proof. The first thing to establish is that, for all y,z, we have \mu_1(z)\cdot\phi_1(y,z)=\mu_2(z)\cdot\phi_2(y,z). This occurs because, for all x, \mu_1(z)\cdot\phi_1(y,z)(x)=(\mu_1\sdp(\la z.\phi_1(y,z)))(x,z)=\kappa(y)(x,z) And then by symmetry, the exact same holds for \mu_2 and \phi_2, since both decompositions were declared to be equal to \kappa. Now that this result is in place, we can begin. (\mu_{1}\ltimes\Theta\ltimes\phi_{1})(\la xyz.f(x,y,z)) =\mu_1(\la z.\Theta(\la y.\phi_1(y,z)(\la x.f(x,y,z)))) Now, we do something odd. We can reexpress this as =\mathfrak{C}(\la z.\mu_1(z)\cdot\Theta(\la y.\phi_1(y,z)(\la x.f(x,y,z)))) Basically, what’s going on here is that we can swap out the contribution \mu_1 for the counting measure \mathfrak{C} (1 measure on each distinct point) and just scale down the expectation values accordingly. It’s pretty much the same way that you can think of \sum_{x}\mu(x)f(x) (expectation of f w.r.t. \mu) as \sum_{x}1\cdot\mu(x)f(x) (expectation of \mu\cdot f w.r.t. the counting measure). Now, since \Theta is homogeneous, we can move constants in or out of it, to get =\mathfrak{C}(\la z.\Theta(\la y.\mu_1(z)\cdot\phi_1(y,z)(\la x.f(x,y,z)))) Now, at this point, we can use that \mu_1(z)\cdot\phi_1(y,z)=\mu_2(z)\cdot\phi_2(y,z), to get =\mathfrak{C}(\la z.\Theta(\la y.\mu_2(z)\cdot\phi_2(y,z)(\la x.f(x,y,z)))) And just back up and reverse everything. =\mathfrak{C}(\la z.\mu_2(z)\cdot\Theta(\la y.\phi_2(y,z)(\la x.f(x,y,z)))) =\mu_2(\la z.\Theta(\la y.\phi_2(y,z)(\la x.f(x,y,z)))) =(\mu_2\sdp\Theta\sdp\phi_2)(\la xyz.f(x,y,z)) And we’re done! \blacksquare
Lemma 4: Let X, Y and Z be finite sets, Q\subseteq Y\times X a relation, \Xi_{1},\Xi_{2}:Y\rightarrow\HUC(X\times Z) Z-PUCKs over Q, \Theta\in\HUC Y and p\in[0,1]. Then, p\Xi_{1}+(1-p)\Xi_{2} is also a Z-PUCK over Q, and \Theta*(p\Xi_{1}+(1-p)\Xi_{2})\subseteq p(\Theta*\Xi_{1})+(1-p)(\Theta*\Xi_{2})
Our first order of business is establishing that the mix of Z-PUCK’s over Q is a Z-PUCK over Q. Here’s what we’ll do. We’ll define a family of kernels, show that they’re all Z-PoCK’s, and that said family makes a Z-PUCK that’s equal to the mix of \Xi_1 and \Xi_2.
So, let \Pi_1 be the set of Z-PoCK’s associated with \Xi_1, and \Pi_2 be the set of Z-PoCK’s associated with \Xi_2. Elements of these sets are \kappa_1 and \kappa_2. Define \Pi as \{p\kappa_1+(1-p)\kappa_2\mid\kappa_1\in\Pi_1,\kappa_2\in\Pi_2\}.
By Definition 4.5, in order to establish that these are Z-PoCK’s over Q, we need to make an appropriate choice of \varpi. In particular, the \varpi associated with \kappa=p\kappa_1+(1-p)\kappa_2 is \varpi^{\kappa}:=p\varpi^{\kappa_1}+(1-p)\varpi^{\kappa_2}. It fulfills definition 4.5 because \kappa(y)(x,z)=(p\kappa_1+(1-p)\kappa_2)(y)(x,z)=p\kappa_1(y)(x,z)+(1-p)\kappa_2(y)(x,z) =p(\ind{Q(y)}\varpi^{\kappa_1})(x,z)+(1-p)(\ind{Q(y)}\varpi^{\kappa_2})(x,z)=(p\ind{Q(y)}\varpi^{\kappa_1}+(1-p)\ind{Q(y)}\varpi^{\kappa_2})(x,z) =\ind{Q(y)}(p\varpi^{\kappa_1}+(1-p)\varpi^{\kappa_2})(x,z)=\ind{Q(y)}\varpi^{\kappa}(x,z) By unpacking our definition, using how mixes of kernels work, applying definition 4.5 for \kappa_1 and \kappa_2, and then just doing some simple regrouping and packing the definition back up, we get our result.
But wait, we still need to show that \varpi^{\kappa} is a Z-polycontribution on Q. Again, this isn’t too hard to show, with Definition 4.4. \sum_{z\in Z}\max_{y\in Y}\sum_{x\in Q(y)}\varpi^{\kappa}(x,z)=\sum_{z\in Z}\max_{y\in Y}\sum_{x\in Q(y)}(p\varpi^{\kappa_1}+(1-p)\varpi^{\kappa_2})(x,z) =\sum_{z\in Z}\max_{y\in Y}\sum_{x\in Q(y)}(p\varpi^{\kappa_1}(x,z)+(1-p)\varpi^{\kappa_2}(x,z)) =\sum_{z\in Z}\max_{y\in Y}\left(p\sum_{x\in Q(y)}\varpi^{\kappa_1}(x,z)+(1-p)\sum_{x\in Q(y)}\varpi^{\kappa_2}(x,z)\right) \le\sum_{z\in Z}\left(p\max_{y\in Y}\sum_{x\in Q(y)}\varpi^{\kappa_1}(x,z)+(1-p)\max_{y\in Y}\sum_{x\in Q(y)}\varpi^{\kappa_2}(x,z)\right) =p\sum_{z\in Z}\max_{y\in Y}\sum_{x\in Q(y)}\varpi^{\kappa_1}(x,z)+(1-p)\sum_{z\in Z}\max_{y\in Y}\sum_{x\in Q(y)}\varpi^{\kappa_2}(x,z)\le p\cdot 1+(1-p)\cdot 1=1 And bam, we have our inequality demonstrated, everything works out. Now we just need to show that this family of Z-PoCK’s makes the Z-PUCK that’s the mixture of the two. We’ll establish equality by showing equality for all functions and all y. \Xi(y)(f)=\max_{\kappa\in\Pi}\kappa(y)(f)=\max_{\kappa\in\{p\kappa_1+(1-p)\kappa_2\mid\kappa_1\in\Pi_1,\kappa_2\in\Pi_2\}}\kappa(y)(f) =\max_{\kappa_1\in\Pi_1,\kappa_2\in\Pi_2}(p\kappa_1+(1-p)\kappa_2)(y)(f)=\max_{\kappa_1\in\Pi_1,\kappa_2\in\Pi_2}(p\kappa_1(y)(f)+(1-p)\kappa_2(y)(f)) =p\max_{\kappa_1\in\Pi_1}\kappa_1(y)(f)+(1-p)\max_{\kappa_2\in\Pi_2}\kappa_2(y)(f)=p\Xi_1(y)(f)+(1-p)\Xi_2(y)(f) =(p\Xi_1+(1-p)\Xi_2)(y)(f) Done, we’ve shown equality of the Z-PUCK with the mixture of other Z-PUCKs, establishing that the mixture of Z-PUCKs is a Z-PUCK.
That leaves establishing our relevant inequality. But before we do that, we’ll be wanting a nice handy form for that asterisk operator to manipulate things with. Given some \kappa that’s a Z-PoCK over Q, remember from the previous proof that a valid choice for \mu and \phi to break \kappa down is \mu(z):=\max_{y'\in Y}\sum_{x'\in Q(y')}\varpi^{\kappa}(z,x') and, abbreviating things a little bit, we have \phi(y,z)(x):=\frac{\ind{x\in Q(y)}\varpi^{\kappa}(z,x)}{\mu(z)} So, we can get a pleasant-to-manipulate form for \Theta*\kappa as follows. (\Theta*\kappa)(\la xyz.f(x,y,z))=(\mu\sdp\Theta\sdp\phi)(\la xyz.f(x,y,z)) =\mu(\la z.\Theta(\la y.\phi(y,z)(\la x.f(x,y,z)))) And proceed further =\sum_{z\in Z}\mu(z)\cdot\Theta(\la y.\phi(y,z)(\la x.f(x,y,z))) =\sum_{z\in Z}\mu(z)\cdot\Theta(\la y.\sum_{x\in X}\phi(y,z)(x)\cdot f(x,y,z)) =\sum_{z\in Z}\mu(z)\cdot\Theta(\la y.\sum_{x\in X}\frac{\ind{x\in Q(y)}\varpi^{\kappa}(z,x)}{\mu(z)}\cdot f(x,y,z)) And then we move the constant into \Theta since it’s homogeneous, and then into the sum, and it cancels out with the fraction. =\sum_{z\in Z}\Theta(\la y.\sum_{x\in X}\ind{x\in Q(y)}\cdot\varpi^{\kappa}(z,x)\cdot f(x,y,z)) =\sum_{z\in Z}\Theta(\la y.\sum_{x\in Q(y)}\varpi^{\kappa}(z,x)\cdot f(x,y,z)) =\sum_{z\in Z}\Theta(\la y.(\ind{Q(y)\x\{z\}}\varpi^{\kappa})(\la x'z'.f(x',y,z'))) This general form will be used whenever we need to unpack \Theta*\kappa. Now, let’s get started on the proof of our subset inclusion thingy. As usual, \Pi will be the set \{p\kappa_1+(1-p)\kappa_2\mid\kappa_1\in\Pi_1,\kappa_2\in\Pi_2\}, and as we’ve shown, that’s the set of Z-PoCK’s associated with p\Xi_1+(1-p)\Xi_2. Also, as we’ve already shown, the associated Z-polycontribution \varpi^{\kappa} for \kappa=p\kappa_1+(1-p)\kappa_2 is p\varpi^{\kappa_1}+(1-p)\varpi^{\kappa_2}. This will be implicitly used in the following. (\Theta*(p\Xi_1+(1-p)\Xi_2))(\la xyz.f(x,y,z))=\max_{\kappa\in\Pi}(\Theta*\kappa)(\la xyz.f(x,y,z)) Now we use our preferred unpacking of how that asterisk operator works. =\max_{\kappa\in\Pi}\sum_{z\in Z}\Theta(\la y.(\ind{Q(y)\x\{z\}}\varpi^{\kappa})(\la x'z'.f(x',y,z'))) And unpack \kappa and \varpi^{\kappa} appropriately. =\max_{\kappa_1\in\Pi_1,\kappa_2\in\Pi_2}\sum_{z\in Z}\Theta(\la y.(\ind{Q(y)\x\{z\}}(p\varpi^{\kappa_1}+(1-p)\varpi^{\kappa_2}))(\la x'z'.f(x',y,z'))) =\max_{\kappa_1\in\Pi_1,\kappa_2\in\Pi_2}\sum_{z\in Z}\Theta(\la y.p(\ind{Q(y)\x\{z\}}\varpi^{\kappa_1})(\la x'z'.f(x',y,z')) +(1-p)(\ind{Q(y)\x\{z\}}\varpi^{\kappa_2})(\la x'z'.f(x',y,z'))) At this point, we use convexity of \Theta, since it’s an ultradistribution. \le\max_{\kappa_1\in\Pi_1,\kappa_2\in\Pi_2}\sum_{z\in Z}(p\Theta(\la y.(\ind{Q(y)\x\{z\}}\varpi^{\kappa_1})(\la x'z'.f(x',y,z'))) +(1-p)\Theta(\la y.(\ind{Q(y)\x\{z\}}\varpi^{\kappa_2})(\la x'z'.f(x',y,z')))) =\max_{\kappa_1\in\Pi_1,\kappa_2\in\Pi_2}(p\sum_{z\in Z}\Theta(\la y.(\ind{Q(y)\x\{z\}}\varpi^{\kappa_1})(\la x'z'.f(x',y,z'))) +(1-p)\sum_{z\in Z}\Theta(\la y.(\ind{Q(y)\x\{z\}}\varpi^{\kappa_2})(\la x'z'.f(x',y,z')))) At this point, you can pack up things. =\max_{\kappa_1\in\Pi_1,\kappa_2\in\Pi_2}p(\Theta*\kappa_1)(\la xyz.f(x,y,z))+(1-p)(\Theta*\kappa_2)(\la xyz.f(x,y,z)) =p\max_{\kappa_1\in\Pi_1}(\Theta*\kappa_1)(\la xyz.f(x,y,z))+(1-p)\max_{\kappa_2\in\Pi_2}(\Theta*\kappa_2)(\la xyz.f(x,y,z)) =p(\Theta*\Xi_1)(\la xyz.f(x,y,z))+(1-p)(\Theta*\Xi_2)(\la xyz.f(x,y,z)) =(p(\Theta*\Xi_1)+(1-p)(\Theta*\Xi_2))(\la xyz.f(x,y,z)) Done! \blacksquare
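Here is a minimal Python sketch of that "pleasant-to-manipulate" form of \Theta*\kappa used above (my own illustration, with a hypothetical relation, a made-up Z-polycontribution, and \Theta represented as a finite list of contributions over Y).

```python
Y, X, Z = ['y0', 'y1'], ['x0', 'x1'], ['z0', 'z1']
Q = {('y0', 'x0'), ('y1', 'x0'), ('y1', 'x1')}                 # hypothetical relation
varpi = {('z0', 'x0'): 0.3, ('z0', 'x1'): 0.2,                 # hypothetical varpi^kappa
         ('z1', 'x0'): 0.4, ('z1', 'x1'): 0.1}
Theta = [{'y0': 1.0}, {'y0': 0.5, 'y1': 0.5}]                  # hypothetical ultracontribution on Y

def theta_exp(h):
    # Theta(h): max over Theta's contributions of that contribution's expectation of h.
    return max(sum(m * h(y) for y, m in contrib.items()) for contrib in Theta)

def star(f):
    # (Theta * kappa)(f) = sum_z Theta(lambda y. sum_{x in Q(y)} varpi[z, x] * f(x, y, z))
    return sum(
        theta_exp(lambda y, z=z: sum(varpi[(z, x)] * f(x, y, z)
                                     for x in X if (y, x) in Q))
        for z in Z
    )

print(star(lambda x, y, z: 1.0))   # evaluates (Theta * kappa) on the constant function 1
```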
Proposition 4.3: Let X, Y and Z be finite sets, Q\subseteq Y\times X a relation, \kappa_{1},\kappa_{2}:Y\rightarrow\Delta^{c}(X\times Z) Z-PoCKs over Q, \Theta\in\HUC Y and p\in[0,1]. Then, p\kappa_{1}+(1-p)\kappa_{2} is also a Z-PoCK over Q, and \Theta*(p\kappa_{1}+(1-p)\kappa_{2})\subseteq p(\Theta*\kappa_{1})+(1-p)(\Theta*\kappa_{2})
Use Lemma 4, along with Z-PoCKs being a special case of Z-PUCKs.
Proposition 4.4: Let X, Y and Z be finite sets, Q\subseteq Y\times X a relation and \Xi:Y\rightarrow\HUC(X\times Z) a Z-PUCK over Q. Denote \Theta:=\top_{Y}*\Xi. Define \beta_{0},\beta_{1}:Z\times Y\times X\rightarrow2^{Z\times Y} by \beta_{0}(z,y,x):=\{z\}\times Q^{-1}(x), \beta_{1}(z,y,x):=Z\times Q^{-1}(x). Then (\Theta\ltimes\beta_{0})^{\downarrow}\subseteq\B(\Theta)\subseteq(\Theta\ltimes\beta_{1})^{\downarrow}
Proof: As usual, when establishing inequalities with downwards closures, we only have to verify the result for monotone functions. So, we may assume that f is monotone, and attempt to show that (\Theta\ltimes\beta_{0})(\la xyz\alp.f(x,y,z,\alp))\le\B_{Z\x Y}(\Theta)(\la xyz\alp.f(x,y,z,\alp)) \le(\Theta\ltimes\beta_{1})(\la xyz\alp.f(x,y,z,\alp)) Remember, bridge transforms cash out as a maximum over contributions, so to show the first inequality, we’ll need to build a contribution that matches or exceeds that first term, and that lands in the bridge transform of \Theta. For the second inequality, it’s considerably easier, we just use our Lemma 2 to figure out what sort of sets the bridge transform is supported on, swap out the sets it’s supported on for a bigger set upper bound, and bam, monotonicity of f takes over from there. Let’s unpack that first thing (\Theta\ltimes\beta_{0})(\la xyz\alp.f(x,y,z,\alp)) =\Theta(\la xyz.\delta_{\beta_0(x,z)}(\la\alp.f(x,y,z,\alp))) =\Theta(\la xyz.f(x,y,z,\beta_0(x,z))) And at this point we unpack what \Theta is. =(\top_Y*\Xi)(\la xyz.f(x,y,z,\beta_0(x,z))) And the \Xi. =\max_{\kappa\in\Pi}(\top_Y*\kappa)(\la xyz.f(x,y,z,\beta_0(x,z))) And then, \kappa can be broken down into some \mu^{\kappa} and \phi^{\kappa}, and that goes on both sides of \top_Y as our previous proposition shows. =\max_{\kappa\in\Pi}(\mu^{\kappa}\sdp\top_Y\sdp\phi^{\kappa})(\la xyz.f(x,y,z,\beta_0(x,z))) =\max_{\kappa\in\Pi}\mu^{\kappa}(\la z.\top_{Y}(\la y.\phi^{\kappa}(y,z)(\la x.f(x,y,z,\beta_0(x,z))))) =\max_{\kappa\in\Pi}\mu^{\kappa}(\la z.\max_{y}\phi^{\kappa}(y,z)(\la x.f(x,y,z,\beta_0(x,z)))) Now we can start filling in some data. There’s a maximizing \kappa^{*}, so we can substitute that in. That gives us a canonical choice for what \mu^{\kappa^{*}} and \phi^{\kappa^{*}} are. Making that substitution, =\mu^{\kappa^{*}}(\la z.\max_{y}\phi^{\kappa^{*}}(y,z)(\la x.f(x,y,z,\beta_0(x,z)))) And then, let d:Z\to Y be the function mapping each particular z to the y which maximizes \phi^{\kappa^{*}}(y,z)(\la x.f(x,y,z,\beta_0(x,z))). This lets us reexpress things as =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.f(x,d(z),z,\beta_0(x,z)))) And now, we can start unpacking things a bit. =\mu^{\kappa^{*}}(\la z.\delta_{d(z)}(\la y.\phi^{\kappa^{*}}(y,z)(\la x.f(x,y,z,\beta_0(x,z))))) =\mu^{\kappa^{*}}(\la z.\delta_{d(z)}(\la y.\phi^{\kappa^{*}}(y,z)(\la x.\delta_{\beta_0(x,z)}(\la\alp.f(x,y,z,\alp))))) And now we can write things as just a giant semidirect product. =(\mu^{\kappa^{*}}\sdp d\sdp\phi^{\kappa^{*}}\sdp\beta_0)(\la xyz\alp.f(x,y,z,\alp)) Now we’ll show that this particular contribution lies in \B(\Theta).
Checking the support condition, we want to check for sure that y,z\in\alp, ie, the event y,z\not\in\alp has measure 0. Let’s begin. (\mu^{\kappa^{*}}\sdp d\sdp\phi^{\kappa^{*}}\sdp\beta_0)(\la xyz\alp.\ind{y,z\not\in\alp}) =\mu^{\kappa^{*}}(\la z.\delta_{d(z)}(\la y.\phi^{\kappa^{*}}(y,z)(\la x.\delta_{\beta_0(x,z)}(\la\alp.\ind{y,z\not\in\alp})))) Substitute in the dirac-deltas. =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{d(z),z\not\in\beta_0(x,z)})) Unpack what \beta_0(x,z) is. =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{d(z),z\not\in Q^{-1}(x)\x\{z\}})) Now, z\in\{z\} always occurs, so that indicator function is the same as just testing whether d(z)\in Q^{-1}(x). =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{d(z)\not\in Q^{-1}(x)})) Rephrasing things a little bit, =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{x\not\in Q(d(z))})) Then, from proposition 4.2, we remember that \la y.\phi^{\kappa^{*}}(y,z) is a PoCK over Q. Ie, for any particular y, \phi^{\kappa^{*}}(y,z) looks like a particular measure (\varpi^{\phi^{\kappa^{*}},z}) restricted to Q(y). So, in particular, \phi^{\kappa^{*}}(d(z),z) must be supported over Q(d(z)). Put another way, with full measure, x\in Q(d(z)). So, this event failing has 0 measure. =\mu^{\kappa^{*}}(\la z.0)=0 And we’re done with that support condition.
Now to show the endofunction condition. As usual, we’ll let s:Y\x Z\to Y\x Z, and let g:X\x Y\x Z\to[0,1]. Actually, for conceptual clarity, since s:Y\x Z\to Y\x Z can be viewed as a pair of functions s_Y:Y\x Z\to Y and s_Z:Y\x Z\to Z, we’ll be using that formulation in our equation. (\mu^{\kappa^{*}}\sdp d\sdp\phi^{\kappa^{*}}\sdp\beta_0)(\la xyz\alp.\ind{s_Y(y,z),s_Z(y,z)\in\alp}g(s_Y(y,z),s_Z(y,z),x)) =\mu^{\kappa^{*}}(\la z.\delta_{d(z)}(\la y.\phi^{\kappa^{*}}(y,z)(\la x.\delta_{\beta_0(x,z)}(\la\alp.\ind{s_Y(y,z),s_Z(y,z)\in\alp}g(s_Y(y,z),s_Z(y,z),x))))) As usual, we’ll substitute in our dirac-deltas to simplify things. =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{s_Y(d(z),z),s_Z(d(z),z)\in\beta_0(x,z)}g(s_Y(d(z),z),s_Z(d(z),z),x))) Substitute in what \beta_0(x,z) is. =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{s_Y(d(z),z),s_Z(d(z),z)\in Q^{-1}(x)\x\{z\}}g(s_Y(d(z),z),s_Z(d(z),z),x))) Now, if that "this pair of points lies in this set" indicator function goes off, then s_Z(d(z),z)=z. So, we can substitute that into the g term afterwards. And then get a \le inequality by making the indicator function less strict. =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{s_Y(d(z),z),s_Z(d(z),z)\in Q^{-1}(x)\x\{z\}}g(s_Y(d(z),z),z,x))) \le\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{s_Y(d(z),z)\in Q^{-1}(x)}g(s_Y(d(z),z),z,x))) And reexpress the indicator function a little bit =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(d(z),z)(\la x.\ind{x\in Q(s_Y(d(z),z))}g(s_Y(d(z),z),z,x))) At this point, we can use that \phi^{\kappa^{*}}(y,z) is \ind{Q(y)}\varpi^{\phi^{\kappa^{*}},z} (ie, fixing z and varying y it just looks like you’re taking one measure and conditioning on various Q(y)), so reexpress things as =\mu^{\kappa^{*}}(\la z.(\ind{Q(d(z))}\varpi^{\phi^{\kappa^{*}},z})(\la x.\ind{x\in Q(s_Y(d(z),z))}g(s_Y(d(z),z),z,x))) And then, view the indicator function as just more conditioning. =\mu^{\kappa^{*}}(\la z.(\ind{Q(d(z))\cap Q(s_Y(d(z),z))}\varpi^{\phi^{\kappa^{*}},z})(\la x.g(s_Y(d(z),z),z,x))) And then, relax about what you’re conditioning on. \le\mu^{\kappa^{*}}(\la z.(\ind{Q(s_Y(d(z),z))}\varpi^{\phi^{\kappa^{*}},z})(\la x.g(s_Y(d(z),z),z,x))) Rewrite it as a kernel again =\mu^{\kappa^{*}}(\la z.\phi^{\kappa^{*}}(s_Y(d(z),z),z)(\la x.g(s_Y(d(z),z),z,x))) Pull out the dirac-delta =\mu^{\kappa^{*}}(\la z.\delta_{s_Y(d(z),z)}(\la y.\phi^{\kappa^{*}}(y,z)(\la x.g(y,z,x)))) Throw one more inequality at it \le\mu^{\kappa^{*}}(\la z.\max_y\phi^{\kappa^{*}}(y,z)(\la x.g(y,z,x))) Write it as top =\mu^{\kappa^{*}}(\la z.\top_{Y}(\la y.\phi^{\kappa^{*}}(y,z)(\la x.g(y,z,x)))) Write as a semidirect product =(\mu^{\kappa^{*}}\sdp\top_Y\sdp\phi^{\kappa^{*}})(\la yzx.g(y,z,x)) Reexpress =(\top_Y*\kappa^{*})(\la yzx.g(y,z,x)) \le\max_{\kappa\in\Pi}(\top_Y*\kappa)(\la yzx.g(y,z,x)) =(\top_Y*\Xi)(\la yzx.g(y,z,x)) =\Theta(\la yzx.g(y,z,x)) And we’re done! Endofunction condition shown. Our relevant contribution is in \B(\Theta). Let’s see, where were we… ah right, we had shown that for all monotone f, (\Theta\ltimes\beta_{0})(\la xyz\alp.f(x,y,z,\alp)) =(\mu^{\kappa^{*}}\sdp d\sdp\phi^{\kappa^{*}}\sdp\beta_0)(\la xyz\alp.f(x,y,z,\alp)) For some choice of d and \kappa^{*}. We know this is in \B(\Theta), so we get \le\max_{\con\in\B(\Theta)}\con(\la yz\alp x.f(x,y,z,\alp)) =\B(\Theta)(\la yz\alp x.f(x,y,z,\alp)) And we’re done! One inequality done. That just leaves showing the second inequality, where \beta_1(x)=Z\x Q^{-1}(x). It’s actually not too bad to show.
Start with \B(\Theta)(\la xy\alp z.f(x,y,z,\alp)) =\max_{\con\in\B(\Theta)}\con(\la yz\alp x.f(x,y,z,\alp)) And then, we recall our Lemma 2, that if \Theta had its support entirely on x,y,z tuples where (y,z)\in h(x) (for some h:X\to 2^{Y\x Z}), then all the \con\in\B(\Theta) would be supported on (x,\alp) pairs where \alp\subseteq h(x). And then, swapping out \alp for h(x), by monotonicity of f, produces a larger value.
To invoke this argument, our choice of h will be \beta_1, where \beta_1(x)=Q^{-1}(x)\x Z. We do need to show that \Theta is supported on such tuples. \Theta(\la xyz.\ind{y,z\not\in Q^{-1}(x)\x Z})=\Theta(\la xyz.\ind{y\not\in Q^{-1}(x)})=\Theta(\la xyz.\ind{x\not\in Q(y)}) =(\top_{Y}*\Xi)(\la xyz.\ind{x\not\in Q(y)})=\max_{\kappa\in\Pi}(\top_{Y}*\kappa)(\la xyz.\ind{x\not\in Q(y)}) =\max_{\kappa\in\Pi}(\mu^{\kappa}\sdp\top_{Y}\sdp\phi^{\kappa})(\la xyz.\ind{x\not\in Q(y)}) =\max_{\kappa\in\Pi}\mu^{\kappa}(\la z.\top_{Y}(\la y.\phi^{\kappa}(y,z)(\la x.\ind{x\not\in Q(y)}))) And then use that \phi^{\kappa}(y,z)=\ind{Q(y)}\varpi^{\phi^{\kappa},z} since it’s a PoCK over Q, to get =\max_{\kappa\in\Pi}\mu^{\kappa}(\la z.\top_{Y}(\la y.(\ind{Q(y)}\varpi^{\phi^{\kappa},z})(\la x.\ind{x\not\in Q(y)}))) Hm, we updated on a set, and are evaluating the indicator function for not being in the set. =\max_{\kappa\in\Pi}\mu^{\kappa}(\la z.\top_{Y}(\la y.0))=0 Ok, so this means we can invoke Lemma 2. We were previously at =\max_{\con\in\B(\Theta)}\con(\la yz\alp x.f(x,y,z,\alp)) So now we can invoke monotonicity and go \le\max_{\con\in\B(\Theta)}\con(\la yz\alp x.f(x,y,z,\beta_1(x))) And then invoke our endofunction property for the stuff in \B(\Theta), letting s be the identity function (and also y,z\in\alp occurs always) to establish a uniform upper bound of \le\Theta(\la xyz.f(x,y,z,\beta_1(x))) =\Theta(\la xyz.\delta_{\beta_1(x)}(\la\alp.f(x,y,z,\alp))) =(\Theta\sdp\beta_1)(\la xyz\alp.f(x,y,z,\alp)) And we’re done! Second inequality demonstrated. \blacksquare
Corollary 4.5: Suppose that for any d\in\D, z\in\Gamma_{1} and \pi:\Hi\rightarrow\A s.t. (d,z)\in\Sup W(\pi), it holds that d\C z\pi. That is, the observations W predicts to receive from the computer are consistent with the chosen policy and W’s beliefs about computations. Let L:\D\rightarrow\R be a Cartesian loss function and \pi:\Hi\rightarrow\A a policy. Define \tilde{L}:\D\times\Gamma_{1}\rightarrow\R by \tilde{L}(h,z):=L(h). Then, (\pr{\el{\Gamma}}\B(\Theta_{W})\cap\Co{\pi}{fair})(\Ph)=W(\pi;\tilde{L})
To a large extent, this will follow the proof of the previous corollary. We’ll use \beta for 2^{\Gam} and \alp for just the policy component.
I’m going to be proceeding very cautiously here. First off, make our \pi value special by swapping it out for \pi^{*} (\pr{\el{\Gamma}}\B(\Theta_{W})\cap\Co{\pi^{*}}{\text{fair}})(\Ph) Now, by the identifications we made earlier, we can identify \Gam with \A^{\Hi}\x\Gam_1, the space of policies and computations. Using that to unpack the function a little bit, we have =(\pr{\el{\Gamma}}\B(\Theta_{W})\cap\Co{\pi^{*}}{\text{fair}})(\la\pi z\beta.\Ph(\pi,z,\beta)) Now, we note that intersecting with top of a particular set is equivalent to updating on the indicator function for that set. Using definition 1.5 to unpack \Co{\pi^{*}}{\text{fair}}, we get =(\pr{\el{\Gamma}}\B(\Theta_{W}))(\la\pi z\beta.\ind{\forall h\in\Hi_{\pi,z,\beta}:G^{\pi,z}(h)=\pi^{*}(h)}\Ph(\pi,z,\beta)) Throughout we’ll be applying that G^{\pi,z}(h) is "what would the agent do on h if the agent is copying the behavior of \pi" (remember, part of the math is "what does the agent do in response to this history" and \pi is our term for that chunk of the math), so we’ll just always rephrase things like that and won’t bother to say G^{\pi,z}(h)=\pi(h) every time we do it. =(\pr{\el{\Gamma}}\B(\Theta_{W}))(\la\pi z\beta.\ind{\forall h\in\Hi_{\pi,z,\beta}:\pi(h)=\pi^{*}(h)}\Ph(\pi,z,\beta)) Pull off the projection, and use d for a destiny in \D. =\B(\Theta_{W})(\la\pi z\beta d.\ind{\forall h\in\Hi_{\pi,z,\beta}:\pi(h)=\pi^{*}(h)}\Ph(\pi,z,\beta)) Applying definition 3.1, and using d' instead of g to remember it’s a destiny, we have =\B(\Theta_W)(\la\pi z\beta d.\ind{\forall h\in\Hi_{\pi,z,\beta}:\pi(h)=\pi^{*}(h)}\min_{ha:ha\in\X{\pi,z,\beta}}\max_{d':ha\sqsubseteq d'}L(d')) Next up is unpacking \X{\pi,z,\beta}. Using definition 3.1, it’s =\B(\Theta_W)(\la\pi z\beta d.\ind{\forall h\in\Hi_{\pi,z,\beta}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,z,\beta}\x\A\wedge(\forall(\pi',z')\in\beta:\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) Now we’ll show that \Hi_{\pi,z,\beta} only depends on pr(\beta), ie, the projection of \beta from 2^{\A^{\Hi}\x\Gam_1} to 2^{\A^{\Hi}}, and that it gets smaller as \beta gets larger, so the function above is monotone. Let’s use definition 1.5 to unpack the event h\in\Hi_{\pi,z,\beta}. (\forall h'a'\sqsubset h,(\pi',z')\in\beta:\pi'(h')=a')\wedge(\exists d':h\sqsubset d'\wedge d'\C z\pi) It shouldn’t be too hard to tell that \beta getting larger makes this set smaller, and that, since the z' doesn’t matter, this condition is really the same as (\forall h'a'\sqsubset h,\pi'\in pr(\beta):\pi'(h')=a')\wedge(\exists d':h\sqsubset d'\wedge d'\C z\pi) So, we’ll use pr(\beta) to remind us that our function only depends on the projection of \beta. =\B(\Theta_W)(\la\pi z\beta d.\ind{\forall h\in\Hi_{\pi,z,pr(\beta)}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,z,pr(\beta)}\x\A\wedge(\forall\pi'\in pr(\beta):\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) And now we can pull that out as a projection! We’re now using \alp for our set of policies, instead of \beta for our set of policy/computation pairs. =pr(\B(\Theta_W))(\la\pi z\alp d.\ind{\forall h\in\Hi_{\pi,z,\alp}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,z,\alp}\x\A\wedge(\forall\pi'\in\alp:\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) To proceed further, we’re going to need to adapt our previous result which gets an upper and lower bound on the bridge transform of suitable \Theta.
Our first order of business is checking that \alp getting larger makes the function larger, which is easy to check since \alp getting larger cuts down on the options to minimize over (increasing value), and makes the antecedent of the implication harder to fulfill, which makes the implication as a whole easier to fulfill, so the indicator function is 1 more often. Now we can proceed further. Abstracting away a bit from our specific function, which will be swapped out for some f:\A^{\Hi}\x\Gam_1\x\D\x 2^{\A^{\Hi}}\to[0,1] which is monotone in that last argument, we have (\Theta_W\sdp\beta_0)^{\downarrow}(\la z\pi d\beta.f(z,\pi,d,pr(\beta))) =(\Theta_W\sdp\beta_0)(\la z\pi d\beta.f(z,\pi,d,pr(\beta))) =\Theta_W(\la z\pi d.f(z,\pi,d,pr(\beta_0(z,d)))) =\Theta_W(\la z\pi d.f(z,\pi,d,pr(\{z\}\x Q_0^{-1}(d)))) =\Theta_W(\la z\pi d.f(z,\pi,d,Q_0^{-1}(d))) =\Theta_W(\la z\pi d.f(z,\pi,d,pr(Z\x Q_0^{-1}(d)))) =\Theta_W(\la z\pi d.f(z,\pi,d,pr(\beta_1(d)))) =(\Theta_W\sdp\beta_1)(\la z\pi d\beta.f(z,\pi,d,pr(\beta))) =(\Theta_W\sdp\beta_1)^{\downarrow}(\la z\pi d\beta.f(z,\pi,d,pr(\beta))) Monotonicity was used to go back and forth between the downwards closure and the raw form. \beta_0 and \beta_1 are as they were in Proposition 4.4, and Q would be the relation on \A^{\Hi}\x\D telling you whether a policy is consistent with a destiny. Now, by Proposition 4.4, since the bridge transform of \Theta_W is sandwiched between those two values, and they’re both equal, we have pr(\B(\Theta_W))(\la z\pi d\alp.f(z,\pi,d,\alp))=\B(\Theta_W)(\la z\pi d\beta.f(z,\pi,d,pr(\beta))) =\Theta_W(\la z\pi d.f(z,\pi,d,Q_0^{-1}(d)))=(\Theta_W\sdp Q_0^{-1})(\la z\pi d\alp.f(z,\pi,d,\alp)) The first equality was just relocating the projection. The second equality was from the bridge transform being sandwiched between two equal quantities, so it equals all the stuff on our previous big list of equalities (we went with the middle one). Then just express as a semidirect product, and you’re done. Applying this to our previous point of =pr(\B(\Theta_W))(\la\pi z\alp d.\ind{\forall h\in\Hi_{\pi,z,\alp}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,z,\alp}\x\A\wedge(\forall\pi'\in\alp:\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) We can reexpress it as =(\Theta_W\sdp Q_0^{-1})(\la\pi z\alp d.\ind{\forall h\in\Hi_{\pi,z,\alp}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,z,\alp}\x\A\wedge(\forall\pi'\in\alp:\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) And start unpacking the semidirect product =\Theta_W(\la\pi zd.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)} \min_{ha:ha\in\Hi_{\pi,z,Q_0^{-1}(d)}\x\A\wedge(\forall\pi':dQ_0\pi'\to\pi'(h)=a)}\max_{d':ha\sqsubseteq d'}L(d')) Now, we’re going to try to address that minimum, and show that the only ha that fulfill the conditions are exactly those ha\sqsubseteq d. This requires showing that ha\sqsubseteq d is a sufficient condition to fulfill the relevant properties, and then to show that ha\not\sqsubseteq d implies a failure of one of the properties.
The proof for this is almost entirely identical to the corresponding proof in the non-Turing-law case; there are no substantive differences besides one issue to clear up. We need to show that W(\pi) being supported on the relation \C for every \pi (one of our starting assumptions) implies that \Theta_W is supported on \C as well. Here’s how we do it. We have \Theta_W=\top_{\A^{\Hi}}W. And then this ultracontribution (by the definition of \Gam_1-PUCKs) can be written as the convex hull of the union of a bunch of ultracontributions of the form \top_{\A^{\Hi}}w, where w is a \Gam_1-PoCK. So, if we can show all of these are supported on the relation \C, then the same holds for the convex hull of their union, i.e. \Theta_W. By Proposition 4.2, we can reexpress this ultracontribution as \mu^w\sdp\top_{\A^{\Hi}}\sdp\phi^w, where w(\pi)=\mu^w\sdp(\la z.\phi^{w}(z,\pi)) for any policy \pi. Now, let’s check the expectation value of the indicator function for \C being violated.

(\mu^w\sdp\top_{\A^{\Hi}}\sdp\phi^w)(\la z\pi d.\ind{\neg d\C z\pi})
=\mu^w(\la z.\top_{\A^{\Hi}}(\la\pi.\phi^w(z,\pi)(\la d.\ind{\neg d\C z\pi})))
=\mu^w(\la z.\max_{\pi}\phi^w(z,\pi)(\la d.\ind{\neg d\C z\pi}))

Let’s assume that the expectation of that indicator function is not zero. Then there must be some particular z in the support of \mu^w where it is nonzero, and some particular \pi^* that attains that nonzero expectation value. So, there’s a z in the support of \mu^w and a \pi^* s.t.

\phi^w(z,\pi^*)(\la d.\ind{\neg d\C z\pi^*})>0

and so, because that z is assigned nonzero measure, this means that we have

\mu^{w}(\la z.\phi^w(z,\pi^*)(\la d.\ind{\neg d\C z\pi^*}))>0

which reshuffles to

(\mu^{w}\sdp(\la z.\phi^w(z,\pi^*)))(\la zd.\ind{\neg d\C z\pi^*})>0

which, via Proposition 4.2, is

w(\pi^*)(\la zd.\ind{\neg d\C z\pi^*})>0

But this contradicts that W(\pi^*) (and so all the w(\pi^*)) were supported on the event d\C z\pi^*, so we have a contradiction, and our result follows.
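The shape of this support argument can also be checked on a small finite model: if each \phi^w(z,\pi) puts no mass on violations of \C, the composed expectation of the violation indicator is a maximum of zeros, hence zero. The base measure, the kernel, and the stand-in for \C below are all hypothetical toy choices.

```python
# Finite sketch of the support argument: with phi^w(z, pi) the base measure
# restricted to C-consistent destinies, the expectation of the violation
# indicator under mu^w, top, and phi^w composed is a max of zeros, hence zero.
from itertools import product

ACTIONS = "ab"
Z = ["z0", "z1"]
DESTINIES = ["".join(p) for p in product(ACTIONS, repeat=2)]
PREFIXES = sorted({d[:k] for d in DESTINIES for k in range(2)})
POLICIES = [dict(zip(PREFIXES, c)) for c in product(ACTIONS, repeat=len(PREFIXES))]

def C(d, z, pi):                       # hypothetical stand-in for the relation d C z pi
    return all(pi[d[:k]] == d[k] for k in range(len(d)))

base = {d: 1.0 / len(DESTINIES) for d in DESTINIES}   # stand-in base measure

def phi(z, pi):                        # restrict the base measure to the event C
    return {d: m for d, m in base.items() if C(d, z, pi)}

def expect(measure, f):
    return sum(m * f(d) for d, m in measure.items())

def violation(z, pi):
    return lambda d: 0.0 if C(d, z, pi) else 1.0

# mu^w (uniform on Z), then top over policies, then phi^w, applied to the indicator:
total = sum(0.5 * max(expect(phi(z, pi), violation(z, pi)) for pi in POLICIES) for z in Z)
assert total == 0.0
print("the composed toy ultracontribution puts no mass on violations of C")
```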
Now that that issue is taken care of, we can swap out our fancy minimization for just minimizing over the ha\sqsubseteq d.

=\Theta_W(\la\pi zd.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}\min_{ha:ha\sqsubseteq d}\max_{d':ha\sqsubseteq d'}L(d'))

This minimization is attained by selecting d itself: any ha\sqsubseteq d still has d as one of its extensions, so every term in the minimum is at least L(d), and taking ha to be all of d makes the inner maximum exactly L(d). So then it turns into

=\Theta_W(\la\pi zd.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d))

At this point, we’ll upper-and-lower-bound this quantity by W(\pi^*)(\la zd.L(d)). Let’s begin.

W(\pi^*)(\la zd.L(d))
=\max_{w\in\Pi}w(\pi^*)(\la zd.L(d))
=\max_{w\in\Pi}\mu^{w}(\la z.\phi^{w}(\pi^*,z)(\la d.L(d)))

Here we’re using Proposition 4.2 on being able to split up the \Gam_1-PoCKs w that make up W a certain way.

=\max_{w\in\Pi}\mu^{w}(\la z.\max_{\pi}\phi^{w}(\pi^*,z)(\la d.\ind{dQ_0\pi}L(d)))

This equality happens because no choice of \pi can make the value larger (the indicator only cuts things down), and picking \pi^* as your choice of \pi recovers the original value, since \phi^{w}(\pi^*,z) can only produce destinies consistent with \pi^*. Now, we can swap out \phi^{w} for the underlying measure \varpi^{\phi^{w},z} with the appropriate event’s indicator function attached:

=\max_{w\in\Pi}\mu^{w}(\la z.\max_{\pi}(\ind{Q(\pi^*)}\varpi^{\phi^{w},z})(\la d.\ind{dQ_0\pi}L(d)))
=\max_{w\in\Pi}\mu^{w}(\la z.\max_{\pi}\varpi^{\phi^{w},z}(\la d.\ind{dQ_0\pi^*}\ind{dQ_0\pi}L(d)))

Reshuffle the indicator function back in:

=\max_{w\in\Pi}\mu^{w}(\la z.\max_{\pi}(\ind{Q(\pi)}\varpi^{\phi^{w},z})(\la d.\ind{dQ_0\pi^*}L(d)))
=\max_{w\in\Pi}\mu^{w}(\la z.\max_{\pi}\phi^{w}(\pi,z)(\la d.\ind{dQ_0\pi^*}L(d)))
=\max_{w\in\Pi}\mu^{w}(\la z.\top_{\A^{\Hi}}(\la\pi.\phi^{w}(\pi,z)(\la d.\ind{dQ_0\pi^*}L(d))))
=\max_{w\in\Pi}(\mu^{w}\sdp\top_{\A^{\Hi}}\sdp\phi^{w})(\la \pi zd.\ind{dQ_0\pi^*}L(d))
=\max_{w\in\Pi}(\top_{\A^{\Hi}}w)(\la \pi zd.\ind{dQ_0\pi^*}L(d))
=(\top_{\A^{\Hi}}W)(\la \pi zd.\ind{dQ_0\pi^*}L(d))
=\Theta_W(\la \pi zd.\ind{dQ_0\pi^*}L(d))

To proceed further, we must show that this is a looser constraint than what was previously in our indicator function. So our next order of business is showing that, certainly,

(\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h))\to dQ_0\pi^*

Let d be an arbitrary destiny in the support of \Theta_W, and h be one of the history prefixes of d. The two conditions for h\in\Hi_{\pi,z,Q_0^{-1}(d)} are fulfilled, because they are

\forall h',a',\pi':h'a'\sqsubset h\wedge dQ_0\pi'\to\pi'(h')=a'
\exists d':h\sqsubset d'\wedge d'\C z\pi

For the first condition, if h'a'\sqsubset h, then h'a'\sqsubset d, and so if \pi' is consistent with d, it must take the same action in response to h', the action that d commands. So it is fulfilled. For the second condition, let d' be d. h\sqsubset d holds, and d\C z\pi holds certainly, because \Theta_W is supported on \C, as we’ve previously shown.
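That last sub-claim (every proper prefix of a supported d lands in \Hi_{\pi,z,Q_0^{-1}(d)}) can be brute-forced on the toy encodings too, with "\pi is consistent with d" standing in for d being in the support of \Theta_W:

```python
# Toy check (hypothetical encodings as in the earlier sketches): for every d and
# every pi consistent with d, each proper prefix h of d lies in the toy history
# set computed with beta = Q_0^{-1}(d).
from itertools import product

ACTIONS = "ab"
DESTINIES = ["".join(p) for p in product(ACTIONS, repeat=3)]
PREFIXES = sorted({d[:k] for d in DESTINIES for k in range(3)})
POLICIES = [dict(zip(PREFIXES, c)) for c in product(ACTIONS, repeat=len(PREFIXES))]

def Q0(d, pi):                              # d Q_0 pi: pi is consistent with d
    return all(pi[d[:k]] == d[k] for k in range(len(d)))

def in_history_set(h, pi, beta):
    # First condition: every prefix-action pair inside h agrees with every policy in beta.
    first = all(p[h[:k]] == h[k] for k in range(len(h)) for p in beta)
    # Second condition: some destiny strictly extending h is consistent with pi
    # (our stand-in for the existence of d' with h strictly before d' and d' C z pi).
    second = any(len(h) < len(dp) and dp.startswith(h) and Q0(dp, pi) for dp in DESTINIES)
    return first and second

for d in DESTINIES:
    beta = [pi for pi in POLICIES if Q0(d, pi)]        # Q_0^{-1}(d)
    for pi in POLICIES:
        if not Q0(d, pi):
            continue                                    # restrict to the toy "support"
        for k in range(len(d)):                         # proper prefixes of d
            assert in_history_set(d[:k], pi, beta)
print("every proper prefix of d lies in the toy history set")
```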
So, certainly, h\sqsubset d\to h\in\Hi_{\pi,z,Q_0^{-1}(d)}. Since we assumed our forall statement as a prerequisite, this means that for all h\sqsubset d, \pi(h)=\pi^{*}(h). And dQ_0\pi means \forall ha\sqsubseteq d:\pi(h)=a. Since \pi^{*}(h) mimics \pi(h) for all history prefixes of d, this means \forall ha\sqsubseteq d:\pi^{*}(h)=a, i.e. dQ_0\pi^{*}.
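The implication we just argued can likewise be checked exhaustively on the toy encodings (the encodings remain hypothetical stand-ins for the post’s definitions):

```python
# Brute-force check: if pi and pi* agree on every proper prefix of d, and
# d Q_0 pi holds, then d Q_0 pi* holds as well.
from itertools import product

ACTIONS = "ab"
DESTINIES = ["".join(p) for p in product(ACTIONS, repeat=3)]
PREFIXES = sorted({d[:k] for d in DESTINIES for k in range(3)})
POLICIES = [dict(zip(PREFIXES, c)) for c in product(ACTIONS, repeat=len(PREFIXES))]

def Q0(d, pi):
    return all(pi[d[:k]] == d[k] for k in range(len(d)))

for d in DESTINIES:
    for pi in POLICIES:
        for pistar in POLICIES:
            agree_on_prefixes = all(pi[d[:k]] == pistar[d[:k]] for k in range(len(d)))
            if agree_on_prefixes and Q0(d, pi):
                assert Q0(d, pistar)
print("prefix-agreement implication verified exhaustively on the toy model")
```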
So, since this is a looser constraint, when we were previously at

=\Theta_W(\la \pi zd.\ind{dQ_0\pi^*}L(d))

we can proceed further to

\ge\Theta_W(\la\pi zd.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d))

which is the value we wanted to sandwich. Proceed further with

=(\top_{\A^{\Hi}}W)(\la\pi zd.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d))
=\max_{w\in\Pi}(\top_{\A^{\Hi}}w)(\la\pi zd.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d))
=\max_{w\in\Pi}(\mu^{w}\sdp\top_{\A^{\Hi}}\sdp\phi^{w})(\la\pi zd.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d))
=\max_{w\in\Pi}\mu^{w}(\la z.\top_{\A^{\Hi}}(\la\pi.\phi^{w}(\pi,z)(\la d.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d))))
=\max_{w\in\Pi}\mu^{w}(\la z.\max_{\pi}\phi^{w}(\pi,z)(\la d.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d)))
\ge\max_{w\in\Pi}\mu^{w}(\la z.\phi^{w}(\pi^*,z)(\la d.\ind{\forall h\in\Hi_{\pi^*,z,Q_0^{-1}(d)}:\pi^*(h)=\pi^{*}(h)}L(d)))
=\max_{w\in\Pi}\mu^{w}(\la z.\phi^{w}(\pi^*,z)(\la d.L(d)))
=\max_{w\in\Pi}(\mu^{w}\sdp(\la z.\phi^{w}(\pi^*,z)))(\la zd.L(d))
=\max_{w\in\Pi}w(\pi^*)(\la zd.L(d))=W(\pi^*)(\la zd.L(d))

And we’ve got the same upper and lower bound, so our overall quantity is W(\pi^*)(\tilde{L}), and we’re done. \blacksquare
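For reference, the sandwich we just completed, written as a single chain (this is purely a restatement of quantities displayed above, nothing new):

W(\pi^*)(\tilde{L})=\Theta_W(\la\pi zd.\ind{dQ_0\pi^*}L(d))\ge\Theta_W(\la\pi zd.\ind{\forall h\in\Hi_{\pi,z,Q_0^{-1}(d)}:\pi(h)=\pi^{*}(h)}L(d))\ge W(\pi^*)(\tilde{L})

so all three quantities coincide, which is exactly the claim.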