When constructing a high-level abstract causal DAG from a low-level DAG, one operation which comes up quite often is throwing away information from a node. This post is about how to do that. First, how do we throw away information from random variables in general? Sparknotes:
- Given a random variable X, we can throw away information by replacing X with f(X) for some function f.
- Given some other random variable Y, f(X) "contains all information in X relevant to Y" if and only if P[Y | f(X)] = P[Y | X]. In particular, the full distribution function (y \rightarrow P[Y=y | X]) is a minimal representation of the information about Y contained in X. For more explanation of this, see Probability as Minimal Map.

For our purposes, starting from a low-level causal DAG, we want to:
- Pick a node X_i
- Pick a set of nodes X_S (with i \in S)
- Replace X_i by f(X_i) for some function f, such that P[X_{\bar{S}} | f(X_i)] = P[X_{\bar{S}} | X_i]

Here \bar{S} denotes all the node indices outside S. (Specifying S rather than \bar{S} directly will usually be easier in practice, since X_S is usually a small neighborhood of nodes around X_i.) In English: we want to throw away information from X_i, while retaining all information relevant to nodes outside the set S. A brute-force check of this condition is sketched below.
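To make the condition concrete, here is a minimal Python sketch, with a hypothetical DAG and made-up distributions chosen purely for illustration: a three-node chain X_1 \rightarrow X_2 \rightarrow X_3, with i = 2 and S = {1, 2} (so \bar{S} = {3}), where X_3 depends on X_2 only through X_2^2, so f(X_2) = X_2^2 should satisfy the condition.

```python
from collections import defaultdict

# Hypothetical chain X1 -> X2 -> X3 (all discrete), for illustration only.
P_X1 = {0: 0.5, 1: 0.5}
P_X2_given_X1 = {
    0: {-1: 0.25, 0: 0.50, 1: 0.25},
    1: {-1: 0.40, 0: 0.20, 1: 0.40},
}

def P_X3_given_X2(x2):
    # X3 depends on X2 only through X2**2, so f(X2) = X2**2 loses
    # nothing relevant to X3.
    return {0: 0.9, 1: 0.1} if x2 ** 2 == 1 else {0: 0.3, 1: 0.7}

def f(x2):
    return x2 ** 2

# Enumerate the joint distribution P[X1, X2, X3].
joint = defaultdict(float)
for x1, p1 in P_X1.items():
    for x2, p2 in P_X2_given_X1[x1].items():
        for x3, p3 in P_X3_given_X2(x2).items():
            joint[(x1, x2, x3)] += p1 * p2 * p3

def P_X3_given(condition):
    """P[X3 | condition], where condition filters (x1, x2, x3) outcomes."""
    totals, norm = defaultdict(float), 0.0
    for (x1, x2, x3), p in joint.items():
        if condition(x1, x2, x3):
            totals[x3] += p
            norm += p
    return {x3: p / norm for x3, p in totals.items()}

# Check P[X3 | X2 = v] == P[X3 | f(X2) = f(v)] for every value v of X2.
for v in (-1, 0, 1):
    lhs = P_X3_given(lambda x1, x2, x3, v=v: x2 == v)
    rhs = P_X3_given(lambda x1, x2, x3, v=v: f(x2) == f(v))
    print(v, lhs, rhs)
    assert all(abs(lhs[y] - rhs[y]) < 1e-12 for y in lhs)
```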
Two prototypical examples:

- In a digital circuit, we pick the voltage in a particular wire at a particular time as X_i. Assuming the circuit is well designed, we will find that only the binary value bin(X_i) is relevant to voltages far away in the circuit or in time. So, with all "nearby" voltages as X_S, we can replace X_i by bin(X_i).
- In a fluid, we pick the (microscopic) positions and momenta of all the particles in a little cell of spacetime as X_i. Assuming uniform temperature, identical particles, and some source of external noise—even just a little bit—we expect that only the total number and momentum of particles in the cell will be relevant to the positions and momenta of particles far away in space and time. So, with all "nearby" cells/particles as X_S, we can replace the microscopic positions and momenta of all particles in the cell with the total number and momentum of particles in the cell.

In both examples, we're throwing out "local" information, while maintaining any information which is relevant "globally". This will mean that local queries—e.g. the voltage in one wire given the voltage in a neighboring wire at the same time—are not supported; short-range correlations violate the abstraction. However, large-scale queries—e.g. the voltage in a wire now given the voltage in a wire a few seconds ago—are supported.
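To see the supported/unsupported distinction in miniature, here is a Monte Carlo sketch of the circuit example. All voltages, noise levels, and "circuit" structure below are invented for illustration: a nearby wire tracks the exact analog voltage, while a far-away wire depends on X_i only through bin(X_i).

```python
import random

random.seed(0)

V_DD, V_SS, THRESHOLD = 3.0, 0.0, 1.5

def bin_(v):
    """The abstraction f: keep only the binary value of the voltage."""
    return v > THRESHOLD

def sample():
    v = random.choice([0.0, 0.4, 2.6, 3.0])   # X_i: exact analog voltage
    v_near = v + random.gauss(0, 0.05)        # nearby wire: tracks the exact voltage
    # far-away wire: an inverter's output, which sees only bin(v)
    v_far = (V_SS if bin_(v) else V_DD) + random.gauss(0, 0.05)
    return v, v_near, v_far

samples = [sample() for _ in range(100_000)]

def mean(idx, condition):
    vals = [s[idx] for s in samples if condition(s[0])]
    return sum(vals) / len(vals)

# Local query: E[v_near | v] depends on the exact voltage, so conditioning
# on bin(v) instead gives a different answer. Not supported.
print(mean(1, lambda v: v == 0.0))      # ~0.0
print(mean(1, lambda v: not bin_(v)))   # ~0.2 (mixture of the 0.0 and 0.4 cases)

# Global query: E[v_far | v] depends on v only through bin(v), so the
# abstraction answers it exactly. Supported.
print(mean(2, lambda v: v == 0.0))      # ~3.0
print(mean(2, lambda v: not bin_(v)))   # ~3.0
```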
Modifying Children
We still have one conceptual question to address: when we replace X_i by f(X_i), how do we modify the children of X_i to use f(X_i) instead?

The first and most important answer is: it doesn't matter, so long as whatever they do is consistent with f(X_i). For instance, suppose X_i ranges over {-1, 0, 1}, and f(X_i) = X_i^2. When f(X_i) = 1, the children can act as though X_i were -1 or 1 - it doesn't matter which, so long as they don't act like X_i = 0. As long as the children's behavior is consistent with the information in f(X_i), we will be able to support long-range queries.

There is one big catch, however: the children do need to all behave as if X_i had the same value, whatever value they choose. The joint distribution P[X_{ch(i)} | X_{sp(i)}, f(X_i)] (where ch(i) = children of i and sp(i) = spouses of i) must be equal to P[X_{ch(i)} | X_{sp(i)}, X_i^*] for some value X_i^* consistent with f(X_i).

The simplest way to achieve this is to pick a particular "representative" value X_i^*(f^*) for each possible value f^* of f(X_i), so that f(X_i^*(f^*)) = f^*. Example: in the digital circuit case, we would pick one representative "high" voltage (for instance the supply voltage V_{DD}) and one representative "low" voltage (for instance the ground voltage V_{SS}). X_i^*(f(X_i)) would then map any high voltage to V_{DD} and any low voltage to V_{SS}. Once we have our representative-value function X_i^*(f(X_i)), we just have the children use X_i^*(f(X_i)) in place of X_i. A code sketch of this construction follows below.

If we want, we could even simplify one step further: we could just choose f to spit out representative values directly. That convention is cleaner for proofs and algorithms, but a bit more confusing for human usage and examples.
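Here is a minimal sketch of the representative-value construction, again using the circuit example; the specific voltages are assumptions for illustration.

```python
V_DD, V_SS, THRESHOLD = 3.0, 0.0, 1.5

def f(v):
    """The abstraction: the binary value of voltage v."""
    return v > THRESHOLD

# One representative low-level value X_i^*(f^*) per abstract value f^*,
# chosen so that f(X_i^*(f^*)) == f^*.
representatives = {True: V_DD, False: V_SS}

def x_star(v):
    """Map any concrete voltage to the representative of its binary value."""
    return representatives[f(v)]

# Children of X_i consume x_star(v) in place of v. The catch above is then
# handled automatically: every child sees the *same* concrete value, and
# that value is consistent with f(v).
for v in [0.0, 0.4, 2.6, 3.0]:
    assert f(x_star(v)) == f(v)
    print(f"{v} V -> {x_star(v)} V")
```

The "one step further" simplification at the end of this section would amount to using x_star itself as the abstraction f.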
Instead of saying "f(X) contains all information in X relevant to Y", it would be better to say that f(X) contains all information in X relevant to Y if you don't condition on anything else. It may be the case that, once you condition on some additional random variable Z, f(X) no longer contains all the relevant information.
Example:
Let X_1, X_2, Z be i.i.d. binary uniform random variables, i.e. each of the variables takes the value 0 with probability 0.5 and the value 1 with probability 0.5. Let X=(X_1, X_2) be a random variable. Let Y= X_1 \oplus X_2 \oplus Z be another random variable, where \oplus is the xor operation. Let f be the function f(X) = f((X_1, X_2)) = X_2.
Then f(X) contains all information in X that is relevant to Y. But if we know the value of Z, then f(X) no longer contains all information in X that is relevant to Y.
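A quick brute-force check of both claims, just enumerating the eight equally likely outcomes:

```python
import itertools
from collections import defaultdict

# X1, X2, Z are i.i.d. uniform bits, Y = X1 xor X2 xor Z, f(X) = X2.
outcomes = list(itertools.product([0, 1], repeat=3))  # (x1, x2, z), each w.p. 1/8

def dist_Y(condition):
    """P[Y | condition], by enumeration over the equally likely outcomes."""
    counts = defaultdict(int)
    for x1, x2, z in outcomes:
        if condition(x1, x2, z):
            counts[x1 ^ x2 ^ z] += 1
    total = sum(counts.values())
    return {y: c / total for y, c in counts.items()}

# Unconditionally, knowing all of X gives the same distribution over Y
# as knowing only f(X) = X2 (both are uniform):
print(dist_Y(lambda x1, x2, z: (x1, x2) == (0, 1)))  # {0: 0.5, 1: 0.5}
print(dist_Y(lambda x1, x2, z: x2 == 1))             # {0: 0.5, 1: 0.5}

# Conditional on Z = 0, knowing all of X pins Y down exactly, but
# knowing only f(X) leaves Y uniform, so f(X) no longer suffices:
print(dist_Y(lambda x1, x2, z: (x1, x2, z) == (0, 1, 0)))  # {1: 1.0}
print(dist_Y(lambda x1, x2, z: (x2, z) == (1, 0)))         # {0: 0.5, 1: 0.5}
```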
Comment
Good point, thanks.