Replicating and extending the grabby aliens model

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model

Summary

This report is the most comprehensive model to date of aliens and the Fermi paradox. In particular, it builds on Hanson et al. (2021) and Olson (2015) and focuses on the expansion of ‘grabby’ civilizations: civilizations that expand at relativistic speeds and make visible changes to the volume they control. This report considers multiple anthropic theories: the self-indication assumption (SIA), as applied previously by Olson & Ord (2022); the self-sampling assumption (SSA), implicitly used by Hanson et al. (2021); and a decision-theoretic approach, as applied previously by Finnveden (2019). In Chapter 1, I model the appearance of intelligent civilizations (ICs) like our own. In Chapter 2, I consider how grabby civilizations (GCs) modify the number and timing of intelligent civilizations that appear.
In Chapter 3, I run Bayesian updates for each of the above anthropic theories. I update on the evidence that we are in an advanced civilization, have arrived roughly 4.5 Gy into the planet’s roughly 5.5 Gy habitable duration, and do not observe any GCs. In Chapter 4, I discuss potential implications of the results, particularly for altruists hoping to improve the far future. Starting from a prior similar to Sandberg et al.’s (2018) literature-synthesis prior, I conclude the following: using SIA, or applying a non-causal decision-theoretic approach (such as anthropic decision theory) with total utilitarianism, one should be almost certain that there will be many GCs in our future light cone. Using SSA[1], or applying a non-causal decision-theoretic approach with average utilitarianism, one should be confident (~85%) that GCs are not in our future light cone, thus rejecting the result of Hanson et al. (2021). However, this update is highly dependent on one’s beliefs about the habitability of planets around stars that live longer than the Sun: if one is certain that such planets can support advanced life, then one should conclude that GCs are most likely in our future light cone. Further, I explore how an average utilitarian may wager that there are GCs in their future light cone if they expect significant trade with other GCs to be possible. These results also follow when taking (log)uniform priors over all the model parameters. All figures and results are reproducible here.

Vignettes

To set the scene, I start with two vignettes of the future. This section can be skipped, and features terms I first explain in Chapters 1 and 2.

High likelihood vignette

In a Monte Carlo simulation of 10^6 draws, the following world gives the highest likelihood for both SIA and SSA (with the reference class of observers in intelligent civilizations). That is, civilizations like ours are both relatively common and typical amongst all advanced-but-not-yet-expansive civilizations in this world. In this world, life is relatively hard. There are five hard try-try steps of mean completion time 75 Gy, as well as 1.5 Gy of easy ‘delay’ steps. Planets around red dwarfs are not habitable, and the universe became habitable relatively late: intelligent civilizations can only emerge from around 8 Gy after the Big Bang. Around 0.3% of terrestrial planets around G-stars like our own are potentially habitable, making Earth not particularly rare. Around 2.5% of intelligent civilizations like our own become grabby civilizations (GCs): this is the SIA Doomsday argument in action. Around 7,000 GCs appear per observable-universe-sized volume (OUSV). GCs already control around 22% of the observable universe, and, as they travel at 0.8c, their light has reached around 35% of the observable universe. Nearly all GCs appear between 10 Gy and 18 Gy after the Big Bang. If humanity becomes a GC, it will be slightly smaller than a typical GC: around 62% of GCs will be bigger. A GC emerging from Earth would in expectation control around 0.1% of the future light cone and almost certainly contain the entire Laniakea Supercluster, itself containing at least 100,000 galaxies.

[Figures: the distribution of GC volumes; the CDF of the time until a GC is visible from Earth.]

The median time by which GCs will be visible to observers on Earth is around 1.5 Gy from now. It is practically certain humanity will not see any GCs any time soon: there is roughly a 0.000005% probability (one in twenty million) that light from GCs reaches us in the next one hundred years[2]. GCs will almost certainly be visible from Earth within around 4 Gy.
As we will see, SIA is highly confident in a future similar to this one. SSA (with the reference class of observers in intelligent civilizations), on the other hand, puts greater posterior credence on human civilization being alone, even though worlds like these have high likelihood.

High decision-worthiness vignette

This world is one that a total utilitarian using anthropic decision theory would wager they are in, if they thought their decisions could influence the value of the future in proportion to the resources that an Earth-originating GC controls. In this world, there are eight hard steps, with mean hardness 23 Gy, and delay steps totalling 1.8 Gy. Planets capable of supporting advanced life are not too rare: around 0.004% of terrestrial planets are potentially habitable. Again, planets around longer-lived stars are not habitable.
Around 90% of ICs become GCs, and roughly 150 GCs appear per observable-universe-sized volume. GCs expand at 0.85c, and a GC emerging from Earth would reach 31% of our future light cone, around 49% of its maximum volume, and would be bigger than ~80% of all GCs. Since there are so few GCs, the median time by which a GC is visible on Earth is not for another 20 Gy.

[Figures: the distribution of GC volumes; the CDF of the time until a GC is visible from Earth.]

1 Intelligent Civilizations

I use the term intelligent civilizations (ICs) to describe civilizations at least as technologically advanced as our own. In this chapter, I derive a distribution of the arrival times of ICs, \alpha(t). This distribution depends on factors such as the difficulty of the evolution of life and the number of planets capable of supporting intelligent life. This distribution does *not* factor in the expansion of other ICs, which may prevent (‘preclude’) later ICs from existing. That is the focus of Chapter 2. The distribution gives the number of other ICs that arrive at the same time as human civilization, as well as the typicality of the arrival time of human civilization, assuming no IC precludes any other.

The universe

I write t_{now} for the time since the Big Bang, estimated to be 13.787 Gy (Ade et al. 2016) [Gy = gigayear = 1 billion years]. Current observations suggest the universe is most likely flat (the sum of angles in a triangle is always 180°), or close to flat, and so the universe is either very large or infinite. Further, the universe appears to be on average isotropic (there are no special directions in the universe) and homogeneous (there are no special places in the universe) (Saadeh et al. 2016, Maartens 2011). The large or infinite size implies that there are volumes of the universe causally disconnected from our own. The collection of ‘parallel’ universes has been called the "Level I multiverse". Assuming the universe is flat, Tegmark (2007) conservatively estimates that there is a Hubble volume identical to ours 10^{10^{115}} m away, and an identical copy of you 10^{10^{29}} m away. I consider a large finite volume (LFV) of the Level I multiverse, and partition this LFV into observable-universe-sized (spherical) volumes (OUSVs)[3]. My model uses quantities as averages *per OUSV*. For example, \alpha(t) will be the rate of ICs arriving per OUSV on average at time t. The (currently) observable universe necessarily defines the limit of what we can currently know, but not what we can eventually know. The *eventually observable universe* has a volume around 2.5 times that of the currently observable universe (Ord 2021). The most action-relevant volume for statistics about the number of alien civilizations is the affectable universe, the region of the universe that we can causally affect. This is around 4.5% of the volume of the observable universe. I will use the term affectable-universe-sized volumes (AUSVs). For an excellent discussion of this topic, I recommend Ord (2021).

The steps to reach life

I consider the path to an IC as made up of a number of steps:

Try-try steps

Abiogenesis is the process by which life arises from non-living matter. This process may require some extremely rare configuration of molecules coming together, such that one can model the process as having some rate 1/a of success per unit time on an Earth-sized planet. The completion time of such a try-try step is exponentially distributed with PDF q_a(t) = \frac{1}{a} \cdot e^{-t/a}. Fixing some time T, such as Earth’s habitable duration, the step is said to be hard if T \ll a. When the step is hard, for t \ll T, q_a(t) \approx \frac{1}{a} is constant since e^{-t/a} \approx 1. Abiogenesis is one of many try-try steps that have led to human civilization. If there are m try-try steps with expected completion times a_1,...,a_m, the completion time of the steps has a hypoexponential distribution with parameter \bar{a}=(a_1,...,a_m). For modelling purposes, I split these try-try steps into delay steps and hard steps.
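The small-t behaviour of this hypoexponential distribution can be checked numerically: when t is much smaller than every a_i, its PDF is approximately t^{m-1}/((m-1)! \prod a_i), which depends on the individual step times only through their geometric mean, and so agrees with a Gamma distribution with shape m and geometric-mean scale. A minimal sketch, with made-up step times:

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def hypoexp_pdf(t, means):
    """Exact PDF of a sum of independent exponentials with distinct means."""
    rates = 1.0 / np.asarray(means, dtype=float)
    total = 0.0
    for i, li in enumerate(rates):
        prod = np.prod([lj / (lj - li) for j, lj in enumerate(rates) if j != i])
        total += li * np.exp(-li * t) * prod
    return total

means = [100.0, 300.0, 900.0]      # made-up hard-step expected times, in Gy
m = len(means)
geo_mean = float(np.prod(means)) ** (1.0 / m)

t = 1.0                            # much smaller than every step's expected time
exact = hypoexp_pdf(t, means)
approx = gamma_dist.pdf(t, a=m, scale=geo_mean)  # Gamma with geometric-mean scale
```

With these values, the exact and approximate densities agree to well within a percent, even though the individual step times differ by an order of magnitude.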
I define the delay steps to be the maximal set of individual steps d_1,...,d_k from the steps a_1,...,a_m such that d := \sum_{i=1}^{k} d_i \leq 4.5 \ Gy, the approximate duration life has taken on Earth so far. I then approximate the completion time of the delay try-try steps with the exponential distribution with parameter d. If they exist, I also include any fuse steps[4] in the sum of d. I write h_1,...,h_n for the expected completion times of the n=m-k remaining steps. These steps are not necessarily hard with respect to Earth’s habitable duration. I model each h_i to have log-uniform uncertainty between 1 Gy and 10^{20} Gy. With this prior, most h_i are much greater than 5 Gy and so hard. I approximate the completion time of all of these steps with the Gamma distribution with parameters n and h := (h_1 h_2 \cdots h_n)^{1/n}, the geometric mean hardness of the try-try steps.[5] The Gamma distribution can further be described as a ‘power law’, as I discuss in the appendix. I write f_{n,h,d}(t) for the PDF of the completion time of all the delay steps and hard try-try steps. Strictly, it is given as the convolution of the Gamma distribution with parameters n, h and the exponential distribution with parameter d. When d \ll t, f_{n,h,d}(t) \approx g_{n,h}(t-d), where g_{n,h}(t) is the PDF of the Gamma distribution. That is, the delay steps can be approximated as completing in their expected time when they are sufficiently short in expectation.

Priors on n

After introducing each model parameter, I introduce my priors. Crucially, all the results in Chapter 3 roughly follow when taking (log)uniform priors over all parameters, and so my particular prior choices are not too important. I consider three priors on n, the number of hard try-try steps. The first, which I call balanced, is chosen to give an implied prior number of ICs similar to existing literature estimates (discussed later in this chapter).
My bullish prior puts greater probability mass on fewer hard steps and so implies a greater number of ICs. My bearish prior puts greater probability mass on many hard steps and so predicts fewer ICs.

[Figure: my three priors on n. The bullish, balanced and bearish priors have distributions \mathrm{Geometric}(0.7), \mathrm{Geometric}(0.35) and \mathrm{Geometric}(0.2) respectively, truncated to n \leq 20.]

My priors on n are uninformed by the timing of life on Earth, but weakly informed by discussion of the difficulty of particular steps that have led to human civilization. For example, Sandberg et al. (2018) (supplement I) consider the difficulty of abiogenesis. In Chapter 3 I update on the time that all the steps are completed (i.e., now). I do not update on the timing of the completion of any potential intermediate hard steps, such as the timing of abiogenesis. Further, I do not update n on the habitable time remaining, which is implicitly an anthropic update. I discuss this in the appendix.

Prior on h

Given these priors on n, I derive my prior on h by taking the geometric mean of n draws from the above-mentioned \mathrm{LogUniform}(1\ Gy, 10^{20} \ Gy). I chose this prior to later give estimates of life in line with existing estimates. A longer-tailed distribution is arguably more applicable.

[Figure: my prior on h for fixed values of n, p(h|n). For higher n, the distribution p(h|n) centres increasingly around 10^{10} \ Gy = \sqrt{1\ Gy \cdot 10^{20} \ Gy}.]

[Figure: my marginalised prior on h for each of my three priors on n.]

Prior on d

My prior on the sum of the delay and fuse steps d is d \sim \mathrm{LogUniform}(0.1 \ Gy, 4.5 \ Gy). By definition d \leq 4.5 \ Gy, and d smaller than 0.1 Gy makes little difference. My prior distribution gives median \sqrt{0.1 \ Gy \cdot 4.5 \ Gy} \approx 0.7 \ Gy. The delay parameter d can also include the delay time between a planet’s formation and the first time it is habitable. On Earth, this duration could have been up to 0.6 Gy (Pearce et al. 2018).

Try-once steps

I also model "try-once" steps: those that either pass or fail with some probability. The Rare Earth hypothesis is an example of a try-once step. The possibility of try-once steps allows one to reject the existence of hard try-try steps, but suppose very hard try-once steps. I write w for the probability of passing through all try-once steps. That is, if there are l try-once steps w_1, w_2,..., w_l, then w = P(w_1) \cdot P(w_2|w_1) \cdots P(w_l|w_1, w_2,...,w_{l-1}). My prior on w is distributed w \sim 10^{-\mathrm{Exp}(2)}. This allows for no try-once steps (w=1). The prior could arguably have a longer tail, and is loosely informed by discussion of potential Rare Earth factors here.

Habitable planets

The parameters above give the distribution of appearance times of an IC on a given planet. In this section, I consider the maximum duration planets can be habitable for, the number of potentially habitable planets, and the formation of stars around which habitable planets can appear.
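As a sketch, the step priors introduced so far can be sampled as follows. The support conventions (the Geometric distributions supported on {1, 2, ...}; Exp(2) read as rate 2, i.e. mean 1/2) are my assumptions, since the report does not pin them down here:

```python
import numpy as np

rng = np.random.default_rng(0)
size = 10_000

# n ~ Geometric(0.35) on {1, 2, ...} (the 'balanced' prior), truncated to n <= 20
# by re-drawing any values above the cutoff.
n = rng.geometric(0.35, size)
while (n > 20).any():
    n[n > 20] = rng.geometric(0.35, int((n > 20).sum()))

# h = geometric mean of n draws from LogUniform(1 Gy, 1e20 Gy):
# log10(h) is then the mean of n Uniform(0, 20) draws.
log10_h = np.array([rng.uniform(0.0, 20.0, k).mean() for k in n])
h = 10.0 ** log10_h

# d ~ LogUniform(0.1 Gy, 4.5 Gy)
d = 10.0 ** rng.uniform(np.log10(0.1), np.log10(4.5), size)

# w ~ 10^(-Exp(2)); Exp(2) read as rate 2 (mean 1/2) -- an assumption
w = 10.0 ** -rng.exponential(0.5, size)
```

Note how the geometric-mean construction concentrates h around 10^{10} Gy for larger n, matching the figure above.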

The maximum planet habitable duration

I write L_{max}[6] for the maximum duration any planet is habitable.[7] The Earth has been habitable for between 3.9 Gy and 4.5 Gy (Pearce et al. 2018) and is expected to be habitable for another ~1 Gy, so as a lower bound L_{max} ⪆ 5 Gy. Our Sun, a G-type main-sequence star, formed around 4.6 Gy ago and is expected to live for another ~5 Gy. Lower-mass stars, such as K-type stars (orange dwarfs), have lifetimes between 15 and 30 Gy, and M-type stars (red dwarfs) have lifetimes up to 20,000 Gy. These lifetimes give an upper bound on the habitable duration of planets in that star’s system, so I consider L_{max} up to around 20,000 Gy. The habitability of these longer-lived stars is uncertain. Since red dwarf stars are dimmer (which results in their longer lives), habitable planets around red dwarf stars must be closer to the star in order to have liquid water, which may be necessary for life. However, planets closer to their star are more likely to be tidally locked. Gale (2017) notes that "This was thought to cause an erratic climate and expose life forms to flares of ionizing electro-magnetic radiation and charged particles", but concludes that in spite of the challenges, "Oxygenic Photosynthesis and perhaps complex life on planets orbiting Red Dwarf stars may be possible". My prior on L_{max} is distributed L_{max} \sim 5 \ Gy + 10^{\mathrm{Exp}(0.7)} \ Gy, truncated to L_{max} \leq 20,000 \ Gy. This prior disfavours the habitability of longer-lived stars. As I later show, this prior is mostly washed out by the anthropic update against the habitability of planets around longer-lived stars. In the appendix, I also consider variants of this prior. This approach to modelling does not allow for planets around red dwarf stars that are habitable for periods equal to the habitable period of Earth. For example, life may only be able to appear in a crucial window in a planet’s lifespan.

Number of habitable planets

Given a value of L_{max}, I now consider the number of habitable planets. To derive an estimate of the number of potentially habitable planets, I only consider the number of terrestrial planets: planets composed of silicate rocks and metals with a solid surface. Recall that the parameter w can indirectly control the number of these that are actually habitable. Zackrisson et al. (2016) estimate 10^{19} terrestrial planets around FGK stars and 5 \cdot 10^{20} around M stars in the observable universe. Interpolating, I set the total number of terrestrial planets around stars that last up to L_{max} per OUSV to be T(L_{max}) = 5 \cdot 10^{18} \cdot (L_{max} \ \mathrm{in} \ Gy)^{0.5}. Hanson et al. (2021) approximate the cumulative distribution of planet lifetimes L with H_{L_{max}}(L) \propto L^{0.5} for L \leq L_{max} and H_{L_{max}}(L) = 1 for L \geq L_{max}. The fraction of planets formed at time b habitable at time t is then given by 1-H_{L_{max}}(t-b). These forms of H_{L_{max}}(L) and T(L_{max}) satisfy the property that for any L_1 < L_2 < L_{max}, the expression T(L_{max}) \cdot [H_{L_{max}}(L_2)-H_{L_{max}}(L_1)], the number of planets per OUSV habitable for between L_1 and L_2 Gy, is independent of L_{max}. In particular, the number of planets habitable for the same duration as Earth is independent of L_{max}. This is implicitly used later in the update: one does not need to explicitly condition on the observation that we are on a planet habitable for ~5 Gy, since the number of planets habitable for ~5 Gy is independent of the model parameters.
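The independence property is easy to verify directly: T(L_{max}) \cdot [H_{L_{max}}(L_2) - H_{L_{max}}(L_1)] = 5 \cdot 10^{18} (\sqrt{L_2} - \sqrt{L_1}), with no L_{max} dependence. A quick sketch:

```python
def T(L_max):
    """Terrestrial planets per OUSV around stars lasting up to L_max (in Gy)."""
    return 5e18 * L_max ** 0.5

def H(L, L_max):
    """CDF of planet habitable lifetimes: proportional to L^0.5, capped at 1."""
    return min((L / L_max) ** 0.5, 1.0)

def planets_between(L1, L2, L_max):
    """Planets per OUSV habitable for between L1 and L2 Gy."""
    return T(L_max) * (H(L2, L_max) - H(L1, L_max))

# The count for an Earth-like habitable duration does not depend on L_max:
short_lived = planets_between(4.0, 6.0, 10.0)     # L_max = 10 Gy
long_lived = planets_between(4.0, 6.0, 20_000.0)  # L_max = 20,000 Gy
```

The two counts agree to floating-point precision, which is what licenses dropping the explicit conditioning later.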

The formation of habitable stars

I use the term "habitable stars" to mean stars with solar systems capable of supporting life. I follow Hanson et al. (2021) in approximating the habitable star formation rate with the functional form \hat{\varrho}(t) \propto t^\lambda \cdot \exp(-t/\varphi), with power \lambda = 3 and decay \varphi = 4 \ Gy, where \int_{0}^{\infty}\hat{\varrho}(t) \ \mathrm{d} t = 1.

[Figure: plots of \hat{\varrho}(t) for three pairs of (\lambda, \varphi), with peak \lambda \cdot \varphi = 12 \ Gy.]

The habitability of the early universe

There is debate over the time the universe was first habitable. Loeb (2016) argues for the universe being habitable as early as 10 My. There is discussion around how much gamma-ray bursts (GRBs) in the early universe prevented the emergence of advanced life. Piran (2014) concludes that the universe was inhospitable to intelligent life > 5 Gy ago. Sloan et al. (2017) are more optimistic and conclude that life could continue below the ground or under an ocean. I introduce an early universe habitability parameter u and function \gamma_u(t), which gives the fraction of habitable planets capable of hosting advanced life at time t relative to the fraction at t_{now}. I take \gamma_u(t) to be a sigmoid function with \gamma_u(t_{now}) \approx 1 and \gamma_u(0) = u (hence u \in (0,1)). My prior on u is log-uniform on (10^{-10}, 0.99).

[Figure: the early universe habitability factor \gamma_u(t), for varying u.]

A more sophisticated approach would consider the interaction between early-universe habitability and the hard try-try steps, as suggested by Hanson et al. (2021).
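A sketch of the normalised star formation rate and the habitability factor. The normalising constant is \Gamma(\lambda+1)\varphi^{\lambda+1}, and the mode of t^\lambda e^{-t/\varphi} sits at \lambda\varphi = 12 Gy. The particular sigmoid (its midpoint and width) is my assumption, chosen only so that \gamma_u(0) \approx u and \gamma_u(t_{now}) \approx 1:

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from scipy.integrate import quad

lam, phi = 3.0, 4.0  # power and decay (Gy), as in Hanson et al. (2021)

def sfr_hat(t):
    """Normalised habitable star formation rate, proportional to t^lam * exp(-t/phi)."""
    norm = gamma_fn(lam + 1) * phi ** (lam + 1)  # integral of t^lam e^(-t/phi) over (0, inf)
    return t ** lam * np.exp(-t / phi) / norm

def gamma_u(t, u, t_mid=6.0, width=1.0):
    """Early-universe habitability: a sigmoid with gamma_u(0) ~ u and gamma_u(t_now) ~ 1.
    The midpoint and width are my assumptions, not taken from the report."""
    return u + (1 - u) / (1 + np.exp(-(t - t_mid) / width))

norm_check, _ = quad(sfr_hat, 0, np.inf)  # should be ~1
peak = lam * phi                          # mode of t^lam e^(-t/phi): 12 Gy
```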

The number of habitable planets at a given time

The number of terrestrial planets per OUSV habitable at time t is T(L_{max}) \cdot \gamma_u(t) \cdot \int_{0}^{t} \hat{\varrho}(b) \cdot [1-H_{L_{max}}(t-b)] \ \mathrm{d}b. Since 1-H_{L_{max}}(t-b) = 0 for t-b \geq L_{max}, the lower bound of the integral can be changed to \max(0, t-L_{max}).

Arrival of ICs

Putting the previous sections together, the appearance rate of ICs per OUSV, \alpha(t), is given by \alpha(t) = w \cdot \gamma_u(t) \cdot T(L_{max}) \cdot \int_{\max(0, t-L_{max})}^{t} f_{n,h,d}(t-b) \cdot \hat{\varrho}(b) \cdot [1-H_{L_{max}}(t-b)] \ \mathrm{d}b. To recap: n is the number of hard try-try steps, h their geometric mean hardness, d the total expected duration of the delay steps, w the probability of passing all try-once steps, L_{max} the maximum planet habitable duration, and u the early universe habitability.
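The expression for \alpha(t) can be evaluated by direct numerical integration. The sketch below uses illustrative parameter values (echoing the report's example plots, not its fitted results), my own assumed sigmoid for \gamma_u, and the f_{n,h,d}(t) \approx g_{n,h}(t-d) approximation described earlier:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma as gamma_dist
from scipy.special import gamma as gamma_fn

# Illustrative parameter values (assumptions for this sketch)
n, h, d, w = 3, 1e5, 1.0, 1.0        # hard steps, hardness (Gy), delay (Gy), try-once prob.
L_max, u = 10.0, 0.1                 # max habitable duration (Gy), early habitability
lam, phi = 3.0, 4.0                  # star formation rate shape

def sfr_hat(b):
    return b**lam * np.exp(-b / phi) / (gamma_fn(lam + 1) * phi**(lam + 1))

def gamma_u(t, t_mid=6.0, width=1.0):  # assumed sigmoid shape
    return u + (1 - u) / (1 + np.exp(-(t - t_mid) / width))

def f_nhd(t):
    # delay steps approximated as a fixed offset d, as in the text
    return gamma_dist.pdf(t - d, a=n, scale=h)

def H(L):
    return min((max(L, 0.0) / L_max) ** 0.5, 1.0)

T = 5e18 * L_max**0.5                # terrestrial planets per OUSV

def alpha(t):
    """IC appearance rate per OUSV at time t (Gy), ignoring preclusion by GCs."""
    integrand = lambda b: f_nhd(t - b) * sfr_hat(b) * (1 - H(t - b))
    val, _ = quad(integrand, max(0.0, t - L_max), t)
    return w * gamma_u(t) * T * val

rate_now = alpha(13.8)               # ICs per OUSV per Gy arriving around t_now
```

With these parameters the rate is positive at t_{now} and vanishingly small in the first gigayear, when few habitable stars have formed and the delay steps have not yet completed.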

The earliness paradox

Depending on one’s choice of anthropic theory, one may update towards hypotheses where human civilization is more typical among the reference class of all ICs. Here, I look at human civilization’s typicality using two pieces of data: human civilization’s arrival at t_{now} and the fact that we have appeared on a planet habitable for ~5 Gy.

An atypical arrival time?

I write \hat{\alpha}(t) for the arrival time distribution \alpha(t) normalised to be a probability density function. This tells us how typical human civilization’s arrival time t_{now} is. That is, \hat{\alpha}(t_{now}) is the probability density of a randomly chosen (eventually) existing IC having arrived at t_{now}.

[Figure: plots of \hat{\alpha}(t) for varying n, all with d = 1 \ Gy, u=0.1, w=1 and h=10^{10} \ Gy. The left-hand plots have L_{max}=10 \ Gy and the right-hand plots have L_{max}=100 \ Gy.]

When planets are habitable for a longer duration, a greater fraction of life appears later. Further, when n is greater, fewer ICs appear overall since life is harder, but a greater fraction of ICs appear later in their planets’ habitable windows: this is the power law of the hard steps.

[Figure: human civilization’s rank, the fraction of ICs that arrive before t_{now}. For many possible combinations of n and L_{max}, human civilization appears highly early. This graph has h=10^{10} \ Gy, d = 1 \ Gy, u=0.1. The rank is independent of w.]

[Figure: the distribution of human civilization’s rank, by my three priors.]

By my priors, human civilization is somewhat but not incredibly early.

An atypical solar system?

There are many more terrestrial planets around red dwarf stars than around stars like our own. If these systems are habitable, then human civilization is additionally atypical (with respect to all ICs) in its appearance around a star like our Sun. Further, life has a longer time to evolve around a longer-lived star, so human civilization would be even more atypical. Haqq-Misra et al. (2018) discuss this, but do not consider that the presence of hard try-try steps leads to a greater fraction of ICs appearing on longer-lived planets.

Resolving the paradox

Suppose a priori one believes L_{max} > 10 \ Gy and n \geq 2, and uses an anthropic theory that updates towards hypotheses where human civilization is more typical among all ICs. Given these assumptions, one expects the vast majority of ICs to appear much further into the future and on planets around red dwarf stars. However, human civilization arrived relatively shortly after the universe first became habitable, on a planet that is habitable for only a relatively short duration, and is thus very atypical (according to our arrival time distribution, which does *not* factor in the preclusion of ICs by other ICs).
There are multiple approaches to resolving this apparent paradox. First, one can reject one’s prior belief in high n and L_{max}, and update towards small n and L_{max}, which lead us to believing we are in a more typical IC. Second, one could change the *reference class* among which human civilization’s typicality is being considered. This, in effect, is changing the question being asked.[8]

The Fermi paradox

Some anthropic theories update towards hypotheses where there are a greater number of civilizations that make the same observations we do (containing observers like us).

The rate of XICs

I write N_{XIC} for the rate of ICs per OUSV with feature X, where X denotes "ICs arriving at t_{now} on a planet that has been habitable for as long as Earth has, and will be habitable for the same duration as Earth will be". The Earth has been habitable for between 3.9 Gy and 4.5 Gy (Pearce et al. 2018). I suppose that Earth has been habitable for 4.5 Gy, since if it has been habitable for just 3.9 Gy, the 600 My difference can be (lazily) modelled as a fuse or delay step. Assuming for the time being that no IC precludes any other, this gives N_{XIC} \propto w \cdot \gamma_u(t_{now}) \cdot f_{n,h,d}(4.5 \ Gy).

2 Grabby Civilizations

It may be hard for humanity to observe a typical IC, especially if they do not last long or do not emit enough electromagnetic radiation to be identified at large distances. If some fraction of ICs persist for a long time, expand at relativistic speeds, and make visible changes to their volumes, one can more easily update on the Fermi observation. Such ICs are called grabby civilizations (GCs). The existence of sufficiently many GCs can ‘solve’ the earliness paradox by setting a deadline by which ICs must arrive, thus making human civilization’s arrival time more typical. In this chapter, I derive an expression for #N_{XIC}, the rate of ICs per OUSV that arrive at the same time as human civilization on a planet habitable for the same duration and do not observe any GCs.

Observation of GCs

Humanity has not observed any intelligent life. In particular, we have not observed any GCs. Whether GCs are not in our past light cone or we simply have not yet seen them is uncertain: GCs may deliberately hide or be hard to observe with humanity’s current technology. It seems clearer that humanity is not inside a GC volume, and at minimum we can condition on this observation.[9] In Chapter 3 I compute two distinct updates: one conditioning on the observation that there are no GCs in our past light cone, and one conditioning on the *weaker* observation that we are not inside a GC volume. If GCs prevent any ICs from existing in their volume, this latter observation is equivalent to the statement that "we exist in an IC". The first observation leaves ‘less room’ for GCs, since we are conditioning on a larger volume not containing any GCs. I lean towards there being no GCs in our past light cone. By considering the waste heat that would be produced by Type III Kardashev civilizations (civilizations using all the starlight of their home galaxy), the Ĝ survey found no Type III Kardashev civilizations using more than 85% of the starlight in the 10^5 galaxies surveyed (Griffith et al. 2015). There is further discussion of the ability to observe distant expansive civilizations in this LessWrong thread.

The transition from IC to GC

I write f_{GC} for the average fraction of ICs that become GCs.[10] I assume that this transition happens in an astronomically short duration, and as such I approximate the distribution of arrival times of GCs as equal to the distribution of arrival times of ICs. That is, the arrival time distribution of GCs is given by f_{GC} \cdot \alpha(t). It seems plausible that a significant fraction of ICs will choose to become GCs. Since matter and energy are likely to be instrumentally useful to most ICs, expanding to control as much volume as they can (thus becoming a GC) is likely to be desirable to many ICs with diverse aims. Omohundro (2008) discusses instrumental goals of AI systems, which I expect will be similar to the goals of GCs (run by AI systems or otherwise). Some ICs may go extinct before being able to become a GC. The extinction of an IC does not entail that no GC emerges: for example, an unaligned artificial intelligence may destroy its origin IC but become a GC itself (Russell 2021). ICs that trigger a (false) vacuum decay that expands at relativistic speeds can also be modelled as GCs. My prior on f_{GC} is distributed \sim 10^{-\mathrm{Exp}(0.4)}, truncated to f_{GC} \geq 0.01. I do not update on the fact that we have not observed any ICs. The smaller f_{GC}, the greater the importance of the evidence that we have not seen any ICs.

The expansion of GCs

I model GCs as all expanding spherically at some constant comoving speed v. My prior on v is distributed \sim 10^{-\mathrm{Exp}(0.43)} c, truncated to v \geq 0.01c. This prior has median 0.5c and is informed by Armstrong & Sandberg’s (2013) considerations of designs for self-replicating probes that travel at speeds v = 0.5c, 0.8c and 0.9c.

The volume of an expanding GC

To calculate the volume of an expanding GC, one must factor in the expansion of the universe. Solving the Friedmann equation gives the cosmic scale factor a(t), a function that describes the expansion of the universe over time: a'(t)^2 = H_0^2 \cdot (\Omega_m \cdot a(t)^{-1} + \Omega_r \cdot a(t)^{-2} + \Omega_\Lambda \cdot a(t)^2), with initial condition a(t_{now}) = 1 and H_0, \Omega_m, \Omega_r and \Omega_\Lambda given by Ade et al. (2016). The Friedmann equation assumes the universe is homogeneous and isotropic, as discussed in Chapter 1.
[Figure: the scale factor a(t). The period after ~9.8 Gy is known as the dark-energy-dominated era: there is accelerating expansion.]

Throughout, I use comoving distances, which do not change over time due to the expansion of space. The comoving distance a probe travelling at speed v that left at time b reaches by time t is \int_b^t \frac{v}{a(t')} \mathrm{d} t'. The comoving volume of a GC at time t that has been growing at speed v since time b is V(b,t,v) = \frac{4\pi}{3} \cdot (\int_b^t \frac{v}{a(t')} \mathrm{d}t')^3. I take V(b,t,v) in units of the fraction of the volume of an OUSV, approximately 4.2 \cdot 10^{14} \ Mly^3.

[Figure: the volume reached by a GC expanding from t_{now} for different speeds.]

Regardless of speed, expansion stops by around 150 Gy: this is the beginning of the era of isolation, where travel will be possible only within gravitationally bound structures (such as the Milky Way).

[Figure: the fraction of the observable universe a GC can expand to as a function of its expansion start date and speed, supposing it is not blocked by any other GC.]

Supposing humanity expands at 0.5c, delaying colonisation by 100 years results in about a 0.0000019% loss of volume. Due to the clumping of stars in galaxies and galaxies in clusters, it’s possible this results in no loss of useful volume.

The fraction of the universe saturated by GCs

Following Olson (2015), I write g(t) for the average fraction of OUSVs unsaturated by GCs at time t and take the functional form g(t) = \exp(-\int_0^t f_{GC} \cdot \alpha(b) \cdot V(b,t,v) \ \mathrm{d} b). Recall that the product f_{GC} \cdot \alpha(b) is the rate of GCs appearing per OUSV at time b. Since \alpha(\cdot) is a function of the parameters n, h, d, w, L_{max} and u, the function g(t) is too. This functional form for g(t) assumes that when GCs bump into other GCs, they do not speed up their expansion in other directions.

[Figure: g(t) for n=6, d=1 \ Gy, L_{max}= 10\ Gy, w=1, u=10^{-10}, f_{GC}=1, v=0.8c and varying h.]
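The scale factor and comoving volume can be computed numerically. A sketch, using approximate Planck parameter values; the exact figures and the 200 Gy integration cutoff are my choices for this illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# Approximate Planck 2015 parameters (illustrative values)
H0 = 67.7 / 978.0        # Hubble constant in 1/Gy (978 converts km/s/Mpc to 1/Gy)
Om, Orad, OL = 0.309, 9e-5, 0.691
t_now = 13.8             # Gy since the Big Bang

def a_prime(t, a):
    # Friedmann equation: a'^2 = H0^2 (Om/a + Orad/a^2 + OL a^2)
    return H0 * np.sqrt(Om / a + Orad / a**2 + OL * a**2)

# Integrate the scale factor forward from a(t_now) = 1
sol = solve_ivp(a_prime, (t_now, 200.0), [1.0], dense_output=True, rtol=1e-8)
a = lambda t: sol.sol(t)[0]

def comoving_volume(b, t, v):
    """Comoving volume (Gly^3) of a GC expanding at speed v (units of c) from time b."""
    dist, _ = quad(lambda tp: v / a(tp), b, t)  # comoving radius in Gly
    return 4.0 / 3.0 * np.pi * dist**3

# Comoving radius reachable from t_now at v = c (expansion has mostly stopped by 200 Gy)
reach = (3.0 * comoving_volume(t_now, 200.0, 1.0) / (4.0 * np.pi)) ** (1.0 / 3.0)
```

The reachable radius comes out around 16–17 Gly, consistent with the affectable universe being roughly 4.5% of the observable universe by volume, as quoted in Chapter 1.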
Relatively small changes in the geometric mean hardness of the hard steps lead to large changes in the fraction of each OUSV eventually saturated by GCs.

[Figure: a heatmap of g(t) for varying h, n and fixed d=1 \ Gy, w=0.1, L_{max}=10\ Gy, u=0.1, f_{GC} = 1 and v=c. Only for a small fraction of pairs (n,h) is the eventual fraction of OUSVs saturated by GCs neither very close to 0 nor exactly 1.]

The actual volume of a GC

I write #V(b,t,v) for the expected actual volume of a GC at time t that began expanding at time b at speed v. Trivially, #V(b,t,v) \leq V(b,t,v), since GCs that prevent expansion can only decrease the actual volume. If GCs are sufficiently rare, then #V(b,t,v) \approx V(b,t,v). I derive an approximation for #V in the appendix. Later, I use the actual volume of a GC as a proxy for the total resources it contains. On a sufficiently large scale, mass (consisting of intergalactic gas, stars, and interstellar clouds) is homogeneously distributed within the universe. This proxy most likely underweights the resources of later-arriving GCs due to the gravitational binding of galaxies and galaxy clusters.

[Figure: a comparison of V to #V for a GC beginning expansion at t_{now} at speed v=c with n=5, h=5000 \ Gy, d=1\ Gy, w=1, L_{max}= 10 \ Gy, u=10^{-10} and f_{GC}=1. In this case, a GC emerging from Earth eventually contains 24% of our future light cone.]

[Figure: the distribution of expected actual GC volumes using the same parameters as directly above. In this case, there are 380 GCs per OUSV, of which around 6% are larger than an Earth-originating GC.]

A new arrival time distribution

The distribution of IC arrival times, \alpha(t), can be adjusted to account for the expansion of GCs, which preclude ICs from arriving. I define \beta(t) := \alpha(t) \cdot g(t), which gives the rate of ICs actually appearing per OUSV, and write #N_{IC} := \int_0^\infty \alpha(t) \cdot g(t) \ \mathrm{d} t for the number of ICs that actually appear per OUSV.
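The functional form of g(t) can be illustrated in a deliberately simplified setting: a static (non-expanding) universe with a constant, made-up GC appearance rate, where the integral has a closed form against which the numerical integration can be checked:

```python
import numpy as np
from scipy.integrate import quad

# Toy version of g(t) = exp(-integral of f_GC * alpha(b) * V(b,t) db)
f_gc = 1.0
alpha_const = 1e-4     # GCs appearing per unit volume per Gy (illustrative value)
v = 0.8                # expansion speed, in units of c

def V(b, t):
    return 4.0 / 3.0 * np.pi * (v * (t - b)) ** 3  # static-universe GC volume

def g(t):
    """Fraction of space unsaturated by GCs at time t."""
    integral, _ = quad(lambda b: f_gc * alpha_const * V(b, t), 0.0, t)
    return np.exp(-integral)

t = 15.0
numeric = g(t)
# Closed form: the integral evaluates to pi * f_gc * alpha_const * v^3 * t^4 / 3
closed_form = np.exp(-np.pi * f_gc * alpha_const * v**3 * t**4 / 3.0)
```

The exp(−t^4) dependence in this toy case shows why saturation is so sharp: small changes in the appearance rate flip g(t) between nearly 1 and nearly 0.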
Plots of \alpha(t), g(t) and \beta(t) for n=3, h=10^{5.5} \ Gy, d=1 \ Gy, w=1, L_{max}=10 \ Gy, u=0.1, f_{GC}=1 and v=0.8c.

Above: plots of \beta(t) with n=3, d=1 \ Gy, w=0.1, L_{max}=10 \ Gy, u=0.1, f_{GC}=1, v=0.8c and varying h.

Plots of \hat{\beta}(t) = \beta(t) / \int_0^\infty \beta(t) \mathrm{d}t with the same parameters as above (the same graphs as above, each rescaled).

A heatmap of #N_{IC}, varying n and h, with fixed d=1 \ Gy, w=0.1, L_{max}=10 \ Gy, u=10^{-5}, f_{GC}=1 and v=c. We see that the number of ICs that actually appear per OUSV is bounded above by around 10^6, even when life is sufficiently easy (as given by n and h) that many more ICs would appear if there were no preclusion. This loose upper bound is primarily determined by v, the speed of expansion: when expansion speeds are lower, more ICs can appear.

The actual number of XICs

I define #N_{XIC} to be the actual number of ICs with feature X to appear, accounting for the expansion of GCs. I consider two variants of this term.

I write #N_{XIC, v=c} for the rate of ICs with feature X per OUSV that do not observe GCs. Since information about GCs travels at the speed of light, g_{v=c}(t) gives the fraction of OUSVs unsaturated by light from GCs at time t. Then

#N_{XIC, v=c} = N_{XIC} \cdot g_{v=c}(t_{now})

gives the number of XICs per OUSV with no GCs in their past light cone.
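The chain from \alpha(t) through g(t) and \beta(t) to #N_{IC} can be sketched numerically. This toy version ignores cosmic expansion (a static-space volume V \propto (v(t-b))^3, using 1 c·Gy = 1000 Mly and the ~4.2 \cdot 10^{14} Mly^3 OUSV volume from the text), and every parameter value below is hypothetical rather than taken from the report's priors.

```python
import math

OUSV_MLY3 = 4.2e14   # volume of an observable-universe-sized volume, Mly^3
CGY_MLY = 1000.0     # 1 c*Gy = 10^9 ly = 1000 Mly (expansion ignored here)

def volume(b, t, v):
    """Toy V(b, t, v): fraction of an OUSV reached since time b
    (static-space shortcut, not the comoving integral)."""
    r = v * CGY_MLY * max(t - b, 0.0)
    return (4 * math.pi / 3) * r ** 3 / OUSV_MLY3

def alpha(t, n=3, h=10 ** 5.5, scale=1e12):
    """Hypothetical IC arrival rate per OUSV per Gy, Gamma(n, h) time profile."""
    return scale * t ** (n - 1) * math.exp(-t / h) / (math.gamma(n) * h ** n)

def g(t, f_gc=1.0, v=0.8, steps=400):
    """g(t) = exp(-int_0^t f_GC * alpha(b) * V(b, t, v) db), midpoint rule."""
    db = t / steps
    integral = sum(f_gc * alpha((i + 0.5) * db) * volume((i + 0.5) * db, t, v)
                   for i in range(steps)) * db
    return math.exp(-integral)

def beta(t, **kw):
    """Rate of ICs that actually appear per OUSV: alpha damped by saturation."""
    return alpha(t) * g(t, **kw)

def n_ic(t_max=60.0, steps=240, **kw):
    """#N_IC = int_0^inf beta(t) dt, truncated at t_max (Gy)."""
    dt = t_max / steps
    return sum(beta((i + 0.5) * dt, **kw) for i in range(steps)) * dt
```

With these toy numbers, raising f_{GC} lowers g(t) and hence #N_{IC}, which is the qualitative behaviour the heatmaps above display.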
Similarly, I write #N_{XIC, v=v}[11] for the rate of ICs with feature X per OUSV that are not inside a GC volume, where v is the expansion speed of GCs. In this case, #N_{XIC, v=v} = N_{XIC} \cdot g_{v=v}(t_{now}).

Left and right: heatmaps of #N_{XIC, v=c} for varying numbers of hard steps n and geometric mean hardness h. Both heatmaps show the same data, but the colour scale is logarithmic on the left and linear on the right. Both take d=1 \ Gy, w=0.1, L_{max}=10 \ Gy, u=0.1, f_{GC}=1 and v=c. The black area in the left heatmap contains pairs (n,h) for which no XICs actually appear, due to all OUSVs being saturated by light from GCs by t_{now}.
The green area in the right heatmap is the ‘sweet spot’ where the most XICs appear. It lies just above the border between the black and green areas in the left heatmap. In this sweet spot, there are many ICs (including XICs), but not so many that XICs are (all) precluded. My bearish, balanced and bullish priors have 16%, 26% and 44% probability mass respectively in cases where the universe is fully saturated with light from GCs by t_{now} (and so #N_{XIC, v=c} = 0).

The balancing act

The Fermi observation limits the number of early-arriving GCs: when there are too many GCs, the existence of observers like us is rare or impossible. For anthropic theories that prefer worlds with more observers like us, there is a push in the other direction: if life is easier, there will be more XICs.
For anthropic theories that prefer observers like us to be more typical, there is potentially a push towards the existence of GCs, which set a cosmic deadline and so make human civilization not unusually early. In the next chapter, I derive likelihood ratios for each anthropic theory and present the resulting updates.

3 Likelihoods & Updates

I’ve presented all the machinery necessary for the updates, other than the anthropic reasoning. I hope this chapter is readable without knowledge of the previous two. I now apply three approaches to dealing with anthropics:

I update on either the observation I label X_c or the observation I label X_v. Both X_c and X_v include observing that we are in an IC that

SIA

I use the following definition of the self-indication assumption (SIA), slightly modified from Bostrom (2002):

All other things equal, one should reason as if they are randomly selected from the set of all[12] possible observer moments (OMs) [a brief time-segment of an observer].[13]

Applying the definition of SIA,

P_{SIA}(X|W_i) = \frac{|XOMs|_i}{\sum_j |OMs|_j} \propto |XOMs|_i

That is, SIA updates towards worlds where there are more OMs like us. Since the denominator is independent of i, we only need to calculate the numerator, |XOMs|_i. By my choice of definitions, |XOMs|_i is proportional to #N_{XIC}, the number of ICs with feature X that actually appear per OUSV. The constant of proportionality is given by the number of OMs per IC, which I suppose is independent of the model parameters, as well as the number of OUSVs in the earlier-specified large finite volume. Again, these constants are unnecessary due to the normalisation.

The three summary statistics implied by the posterior are below. As mentioned before, the updates are reproducible here.

**Updating with observation X_c**

**Updating with observation X_v**

SIA updates overwhelmingly towards the existence of GCs in our light cone from all three of my priors. If a GC does not emerge from Earth, most of the volume will be expanded into by other GCs. I discuss some marginal posteriors here, and reproduce all the marginal posteriors in the appendix.

SIA updates towards smaller f_{GC}, as the existence of more GCs can only decrease the number of observers like us. This is the "SIA Doomsday" described by Grace (2010). This result is the same as that found by Olson & Ord (2021), whereby the prior on f_{GC} maps to the posterior P(f_{GC}) \mapsto P(f_{GC})/f_{GC}.

The SIA update is overwhelmingly towards smaller L_{max}: increasing L_{max} only increases the number of GCs that could preclude XICs.
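The normalisation argument above is easy to see in a toy grid update: any constant of proportionality in |XOMs|_i cancels, so only the relative #N_{XIC} across worlds matters. The three worlds and their #N_{XIC} values below are made up purely for illustration.

```python
def sia_posterior(prior, n_xic):
    """SIA: posterior over worlds is proportional to prior * |XOMs|, and
    |XOMs| is proportional to #N_XIC, so unknown constants cancel on
    normalisation."""
    weights = [p * n for p, n in zip(prior, n_xic)]
    total = sum(weights)
    return [w / total for w in weights]

# Three toy worlds with equal prior but different numbers of observers like us.
prior = [1 / 3, 1 / 3, 1 / 3]
n_xic = [1.0, 10.0, 100.0]   # hypothetical #N_XIC per OUSV in each world
post = sia_posterior(prior, n_xic)
```

The posterior concentrates on the world with the most observers like us, which is the qualitative driver of all the SIA results in this chapter.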
SIA posteriors on f_{GC}

SIA posteriors on L_{max}

SSA

I use the following definition of the self-sampling assumption (SSA), again slightly modified from Bostrom (2002):

All other things equal, one should reason as if they are randomly selected from the set of all actually existent observer moments (OMs) in their reference class.[14]

A reference class R is a choice of some subset of all OMs. Applying the definition of SSA with reference class R,

P_{SSA, R}(X|W_i) = \frac{|RXOMs|_i}{|ROMs|_i}

That is, SSA updates towards worlds where observer moments like our own are more common in the reference class.
I first consider two reference classes, R_{ICs} and R_{all}. The reference class R_{ICs} contains only OMs contained in ICs, and no OMs in GCs. This is the reference class implicitly used by Hanson et al. (2021). The reference class R_{all} also includes observers in GCs. I later consider the minimal reference class, containing only observers who have identical experiences, paired with non-causal decision theories.

Small reference class R_{ICs}

This is the reference class implicitly used by Hanson et al. (2021). I reach different conclusions from Hanson et al. (2021), and discuss a possible error in their paper in the appendix.
The total number of OMs in R_{ICs} is proportional to the number of ICs, #N_{IC}. As in the SIA case, the number of XOMs is proportional to #N_{XIC}, so the likelihood ratio is #N_{XIC}/#N_{IC}.

**Updating with observation X_c**

**Updating with observation X_v**

SSA has updated away from the existence of GCs in our future light cone. In the appendix, I discuss how this update is highly dependent on the lower bound on the prior for L_{max}. Again, smaller L_{max} is unsurprisingly preferred.

SSA R_{ICs} posterior on L_{max}

Large reference class R_{all}

This reference class contains all OMs that actually exist in our large finite volume, and so includes OMs that GCs create. It is sometimes called the "maximal" reference class[15].
I model GCs as using some fraction of their total volume to create OMs. I suppose that this fraction and the efficiency of OM creation are independent of the model parameters. These constants do not need to be calculated, since they cancel when normalising.
The total volume controlled by all GCs is proportional to 1-g(t_L), the average fraction of OUSVs saturated by GCs at some time t_L when all expansion has finished[16]. I assume that a single GC creates many more OMs than are contained in a single IC. Since my prior on f_{GC} has f_{GC} \geq 0.01 and I expect GCs to produce many OMs, I see this as a safe assumption. This assumption implies that the total number of OMs is proportional to 1-g(t_L). The SSA R_{all} likelihood ratio is #N_{XIC}/[1-g(t_L)]. I do not see this update as particularly informative, since I expect GCs to create simulated XOMs, which I explore later in this chapter.

**Updating with observation X_c**

**Updating with observation X_v**[17]

Notably, SSA R_{all} updates towards v as small as possible, since increasing the speed of expansion increases the number of observers created that are not like us — the denominator in the likelihood ratio. As with the SSA R_{ICs} update, this result is sensitive to the prior on L_{max}, which I discuss in the appendix.
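Both SSA likelihood ratios above (#N_{XIC}/#N_{IC} for R_{ICs} and #N_{XIC}/[1-g(t_L)] for R_{all}) have the same shape: the weight of a world is how typical observers like us are within the reference class. The sketch below uses invented world counts purely for illustration.

```python
def ssa_posterior(prior, xoms, ref_oms):
    """SSA grid update: the likelihood of world i is the fraction of the
    reference class that are XOMs, so weight_i = prior_i * |XOMs|_i / |ROMs|_i."""
    weights = [p * x / r for p, x, r in zip(prior, xoms, ref_oms)]
    total = sum(weights)
    return [w / total for w in weights]

# Two toy worlds (all values hypothetical). World B has more XICs than A,
# but far more ICs, so we are less typical there.
prior = [0.5, 0.5]
n_xic = [1.0, 10.0]      # stands in for |XOMs| up to a constant
n_ic = [10.0, 1000.0]    # reference class R_ICs: all observers in ICs
post_ics = ssa_posterior(prior, n_xic, n_ic)

# Reference class R_all: denominator proportional to total GC resources, 1 - g(t_L)
one_minus_g = [0.2, 0.9]
post_all = ssa_posterior(prior, n_xic, one_minus_g)
```

Note that SSA R_{ICs} here prefers world A even though SIA (which multiplies by raw #N_{XIC}) would prefer B: typicality, not abundance, drives the update.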

Non-causal decision theoretic approaches

In this section, I apply non-causal decision theoretic approaches to reasoning about the existence of GCs. This section does not deal with probabilities but with ‘wagers’: that is, how much one should behave as if they are in a particular world.
The results I produce are applicable to multiple non-causal decision theoretic approaches. They apply to someone using SSA with the minimal reference class (R_{min}) paired with a non-causal decision theory, such as evidential decision theory (EDT): SSA R_{min} contains only observers identical to you, so updating using SSA R_{min} simply removes any world where there are no observers with the same observations as you, and then normalises. The results also apply to someone (fully) sticking with their priors (being ‘updateless’) and using a decision theory such as anthropic decision theory (ADT). ADT, created by Armstrong (2011), converts questions about anthropic probability into decision problems, and Armstrong notes that "ADT is nothing but the Anthropic version of the far more general ‘Updateless Decision Theory’ and ‘Functional Decision Theory’".

Application

I suppose that all decision-relevant ‘exact copies’ of me (i.e. instances of my current observations) are in one of the following situations:

This gives the degree to which I should wager my decisions on being in a particular world.

Total utilitarianism

The number of copies of me in ICs that become GCs is proportional to f_{GC} \cdot #N_{XIC}. The expected actual volume of such GCs is #V(t_{now}, t_L, v). Using the assumption that our influence is linear in resources, the decision worthiness of each world is

f_{GC} \cdot #N_{XIC} \cdot #V(t_{now}, t_L, v)

I use the label "ADT total" for this case.

**Updating with observation X_c**

**Updating with observation X_v**

Total utilitarians using a non-causal decision theory should behave as if they are almost certain of the existence of GCs in their future light cone. However, the number of GCs is fairly low, around 40 per OUSV.

Average utilitarianism

As before, the number of copies of me in ICs that become GCs is proportional to f_{GC} \cdot #N_{XIC}, and again the expected actual volume of such a GC is #V(t_{now}, t_L, v). The resources of all GCs are proportional to 1-g(t_L). Supposing that GCs create moral patients in proportion to their resources, the decision worthiness of each world is

f_{GC} \cdot #N_{XIC} \cdot #V(t_{now}, t_L, v) \cdot [1-g(t_L)]^{-1}

I use the label "ADT average" for this case.

**Updating with observation X_c**

**Updating with observation X_v**

An average utilitarian should behave as if there are most likely no GCs in their future light cone. As with the SSA updates, this update is sensitive to the prior on L_{max} and is explored in an appendix.
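The two decision-worthiness formulas can be collected in one helper. The two toy worlds below are invented to show how the extra [1-g(t_L)]^{-1} factor can flip the wager from a crowded universe to a nearly empty one.

```python
def decision_worthiness(f_gc, n_xic, v_actual, one_minus_g, average=False):
    """Wager weight of a world under a non-causal decision theory.
    total:   f_GC * #N_XIC * #V -- copies of us that become GCs, times the
             actual volume each one controls.
    average: the same, divided by 1 - g(t_L), a proxy for the total
             resources (and hence moral patients) of all GCs."""
    w = f_gc * n_xic * v_actual
    return w / one_minus_g if average else w

# Two hypothetical worlds: a crowded one (many GCs, little volume each) and a
# nearly empty one (we would control a large share of a sparsely settled volume).
crowded = dict(f_gc=1.0, n_xic=100.0, v_actual=0.001, one_minus_g=1.0)
empty = dict(f_gc=1.0, n_xic=0.01, v_actual=1.0, one_minus_g=0.01)
```

With these numbers the total utilitarian wagers on the crowded world, while the average utilitarian wagers on the empty one, mirroring the divergence between "ADT total" and "ADT average" above.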

Interaction with GCs

I now model two types of interactions between GCs: trade and conflict. The model of conflict I consider decreases the decision worthiness of cases where there are GCs in our future light cone; I show that a total utilitarian should wager as if there are no GCs in their future light cone if they think the probability of conflict is sufficiently high. The model of trade I consider increases the decision worthiness of cases where there are GCs in our future light cone; I show that an average utilitarian should wager that there are GCs in their future light cone if they think there are sufficiently large gains from trade with other GCs. The purpose of these toy examples is to illustrate that a total or average utilitarian’s true wager with respect to GCs may be more nuanced than presented earlier.

Total utilitarianism and conflict

Suppose we are in the contrived case where:

Ancestor simulations

In the future, an Earth-originating GC may create simulations of the history of Earth or simulate worlds containing counterfactual human civilizations. I call these ancestor simulations (AS).
Bostrom (2003) concludes that at least one of the following is true:

Historical simulations

As well as running simulations of their own past, GCs may create simulations of other ICs. GCs may be interested in the values or behaviours of other GCs they may encounter, and can learn about the distribution of these by running simulations of ICs. I use the term historical simulations (HS) to describe a behaviour of simulating ICs where the distribution of simulated ICs is equal to the true distribution of ICs. That is, the simulations are representative of the outside world, even if GCs run the simulations one IC at a time.

Other OMs

GCs may create many other OMs, simulated or not, of which none are XOMs. For example, a post-human GC may create a simulated utopia of OMs. I use the term other OMs as a catch-all term for such OMs.

Simulation budget

I model GCs as either

Most XOMs are in simulations

I first give an example to motivate the claim that when GCs create simulated XOMs, the majority of all XOMs are in such simulations rather than at the ‘basement level’. Bostrom (2003) estimates that the resources of the Virgo Supercluster, a structure that contains the Milky Way and could be fully controlled by an Earth-originating GC, could be used to run 10^{29} human lives per second, each containing many OMs. Around 10^{11} humans have ever lived; if we expect a GC to emerge in the next few centuries, it seems unlikely more than 10^{12} humans will have lived by this time. In this case, only a fraction 10^{-17} of such a GC’s resources would need to be used for a single second to create as many XOMs as there are basement-level XOMs.
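A quick check of this arithmetic, using only the figures quoted in the paragraph above:

```python
# Numbers from the text: ~1e29 human lives per second from Virgo Supercluster
# resources (Bostrom 2003), and a generous 1e12 basement-level humans ever.
lives_per_second = 1e29
basement_humans = 1e12

# Fraction of one second's worth of resources needed to match every
# basement-level human with a simulated one.
fraction_needed = basement_humans / lives_per_second  # = 1e-17
```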
When GCs create AS or HS, I assume that the number of XOMs in AS or HS far exceeds the number of XOMs in XICs. That is, most observers like us are in simulations. Both SIA and SSA R_{all} support the existence of simulations of XOMs: holding all else equal, creating simulated XOMs (trivially) increases the number of XOMs and the ratio |XOMs|/|OMs|.

Likelihood ratios

I first calculate |XOMs| for each simulation behaviour. These give the SIA likelihood ratios. As previously discussed in the SSA R_{all} case, I suppose that the vast majority of OMs are in GCs and so are created in proportion to the resources controlled by GCs, 1-g(t_L). Dividing |XOMs| by 1-g(t_L) then gives the SSA R_{all} likelihood ratio.

| GCs create | \|XOMs\| is proportional to[23]: | Derivation |
| --- | --- | --- |
| AS fixed | f_{GC} \cdot #N_{XIC} | I assume that the fixed number of OMs is much greater than 1/f_{GC}; this means one can approximate all XOMs as contained in AS. The number of XICs that actually appear is #N_{XIC}, of which a fraction f_{GC} become GCs. |
| HS fixed | f_{GC} \cdot #N_{XIC} | The total number of GCs that appear is f_{GC} \cdot #N_{IC}. Each creates some average number of HS, each containing some average constant number of XOMs. The fraction of ICs in HS which are XICs is #N_{XIC}/#N_{IC}. The product of these terms is f_{GC} \cdot #N_{XIC}. Intuitively, this is equal to the AS fixed case, as the same ICs are being sampled and simulated, but the distribution of which-GC-simulates-which-IC has been permuted. |
| AS resource proportional | f_{GC} \cdot #N_{XIC} \cdot #V(t_{now}, t_L, v) | The number of GCs that create AS containing XICs is f_{GC} \cdot #N_{XIC}. The number of AS each of these GCs creates is proportional to the actual volume each would control, #V(t_{now}, t_L, v). |
| HS resource proportional | \frac{#N_{XIC}}{#N_{IC}} \cdot [1-g(t_L)] | Of all HS created, #N_{XIC}/#N_{IC} will be of XICs. The total number of HS created is proportional to the average fraction of OUSVs saturated by GCs, 1-g(t_L). |

Note that the derivations above give the equivalences between
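The four proportionalities can be collected in one sketch; the input values are hypothetical and serve only to exercise the formulas, and the AS-fixed and HS-fixed weights coincide, as derived above.

```python
def xoms_weight(behaviour, f_gc, n_xic, n_ic, v_actual, one_minus_g):
    """|XOMs| up to a constant, for each GC simulation behaviour."""
    if behaviour in ("AS fixed", "HS fixed"):
        return f_gc * n_xic
    if behaviour == "AS resource proportional":
        return f_gc * n_xic * v_actual
    if behaviour == "HS resource proportional":
        return (n_xic / n_ic) * one_minus_g
    raise ValueError(behaviour)

# Hypothetical inputs, purely to exercise the formulas.
params = dict(f_gc=0.1, n_xic=2.0, n_ic=20.0, v_actual=0.05, one_minus_g=0.5)
```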

SIA updates

**Simulation behaviour** | **Updating with observation X_c** | **Updating with observation X_v**

AS fixed / HS fixed

HS resource proportional

SSA R_{all} updates

**Simulation behaviour** | **Updating with observation X_c** | **Updating with observation X_v**

AS fixed / HS fixed

4 Conclusion

Summary of results

| Anthropic theory | SIA | ADT total utilitarianism | ADT average utilitarianism | SSA R_{all} | SSA R_{ICs} |
| --- | --- | --- | --- | --- | --- |
| No XOMs | 1 | 4 | 5 | 6 | 8 |
| HS-fixed | 2 | 4 | 5 | 7 | 8 |
| AS-fixed | 2 | 4 | 5 | 7 | 8 |
| HS-rp | 3 | 4 | 5 | 8 | 8 |
| AS-rp | 4 | 4 | 5 | 5 | 8 |

In the above table, the left column gives the shorthand description of GC simulation-creating behaviour. Equivalent updates have the same colour and number.

The posterior credence in being alone in the observable universe, conditioned on observation X_c:

| Prior | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Bullish | <0.1% | <0.1% | <0.1% | 0.2% | 70% | 68% | 69% | 64% |
| Balanced | <0.1% | <0.1% | <0.1% | 0.2% | 89% | 89% | 89% | 85% |
| Bearish | <0.1% | <0.1% | <0.1% | 0.2% | 94% | 95% | 95% | 92% |

These results replicate previous findings:

Which anthropic theory?

My preferred approach is to use a non-causal decision theoretic approach, and to reason in terms of wagers rather than probabilities. Within the choice of utility function, in finite worlds forms of total utilitarianism are more appealing to me. However, it seems likely that the world is infinite and that aggregative consequentialism must confront infinitarian paralysis, the problem that in infinite worlds one is ethically indifferent between all actions. Some solutions to infinitarian paralysis require giving up on the maximising nature of total utilitarianism (Bostrom (2011)) and may look more averagist[24]. However, interactions with other GCs, such as through trade, make it plausible that even average utilitarians should behave as if GCs are in their future light cone. Having said this, theoretical questions remain with the use of non-causal decision theories (e.g. comments here on UDT and FDT).

Why does this matter?

If an Earth-originating GC ever observes another GC, it will most likely not be for hundreds of millions of years. By this point, one may expect such a civilization to be technologically mature, rendering any considerations related to the existence of aliens redundant. Further, any actions we take now may be unable to influence the far future. Given these concerns, are any of the conclusions action-relevant? Primarily, I see these results as most important for the design of artificial general intelligence (AGI). It seems likely that humanity will hand off control of the future, inadvertently or by design, to an AGI. Some aspects of the AGI humanity builds may be locked in, such as its values, decision theory or the commitments it chooses to make.
Given this lock-in, altruists concerned with influencing the far future may be able to influence the design of AGI systems to reduce the chance of conflict between this AGI and other GCs (presumably also controlled by AGI systems). Clifton (2020) outlines avenues to reduce cooperation failures such as conflict.

Astronomical waste?

Bostrom (2003) gives a lower bound of 10^{14} biological human lives lost per second of delayed colonization, due to the finite lifetimes of stars. This estimate does not include stars that become unreachable for a human civilization due to the expansion of the universe. The existence of GCs in our future light cone may strengthen or weaken this consideration: if GCs are aligned with our values, then even if a GC never emerges from Earth, the cosmic commons may still be put to good use. This does not apply when using SSA, or a non-causal decision theory with average utilitarianism, which conclude that only a human-originating GC could reach much of our future light cone.

SETI

The results have clear implications for the search for extraterrestrial intelligence (SETI). One key result is the strong update against the habitability of planets around red dwarfs. For the self-sampling assumption, or a non-causal decision theoretic approach with average utilitarianism, there is great value of information in learning whether such planets are in fact suitable for advanced life: if they are, SSA strongly endorses the existence of GCs in our future light cone, as discussed in the appendix. SIA, or a non-causal decision theoretic approach with total utilitarianism, is confident in the existence of GCs in our future light cone regardless of the habitability of red dwarfs. The model also informs the probability of success of SETI for ICs in our past light cone. Such ICs may not be visible to us now if they were too quiet for us to notice or did not persist for long.
The distribution of the probability that an XIC has an IC-that-did-not-become-a-GC in its past light cone, implied by the posteriors from our balanced prior, with the update from conditioning on there being no GCs in our past light cone.

Risks from SETI

Barnett (2022) discusses and gives an admittedly "non-robust" estimate of a "0.1-0.2% chance that SETI will directly cause human extinction in the next 1000 years".
I consider the implied posterior distribution on the probability of a GC becoming observable in the next thousand years. The (causal) existential risk from GCs is strictly smaller than the probability that light reaches us from at least one GC, since the former entails the latter.

The distribution of the implied probability that we observe a GC in the next 1,000 years, conditioned on no GCs in our past light cone.

The distribution of the implied probability that a GC reaches Earth in the next 1,000 years, conditioned on no GCs having reached us already.

The posteriors imply a relatively negligible chance of contact (observation or visitation) with GCs in the next 1,000 years, even for SIA.
Instead, it seems the risk in the next 1,000 years is more likely to come from GCs that are already potentially observable but that we have not yet noticed; perhaps more advanced telescopes will reveal such GCs.

Further work

I list some directions in which this work could be taken further. All the calculations can be found here. I have not updated on all the evidence available; further evidence one could update on includes:

Acknowledgements

I would like to thank Daniel Kokotajlo for his supervision and guidance. I’d also like to thank Emery Cooper for comments and corrections on an early draft, and Lukas Finnveden and Robin Hanson for comments on a later draft. The project has benefited from conversations with Megan Kinniment, Euan McClean, Nicholas Goldowsky-Dill, Francis Priestland and Tom Barnes. I’m also grateful to Nuño Sempere and Daniel Eth for corrections on the Effective Altruism Forum. Any errors remain my own. This project started during Center on Long-Term Risk’s Summer Research Fellowship.

Glossary

n: The number of hard try-try steps
h: The geometric mean of the hard steps ("hardness")
d: The sum of the delay and fuse steps, strictly less than Earth’s habitable duration
w: The probability of passing through all try-once steps in the development of an IC
L_{max}: The maximum duration a planet can be habitable for
u: The decay power of gamma ray bursts
v: The average comoving speed of expansion of GCs
f_{GC}: The fraction of ICs that become GCs
IC: Intelligent civilization
XIC: Intelligent civilizations similar to human civilization in that

Appendix: Updating n on the time remaining

I discuss how using the remaining habitable time on Earth to update on the number of hard steps n is implicitly an anthropic update. In particular, I discuss it in the context of Hanson et al. (2021) (henceforth "they" and "their"). They later perform another anthropic update, using a different reference class, which I see as problematic.

Their prior on n is derived by using the self-sampling assumption with the reference class of observers on planets habitable for ~5 Gy (the same as Earth). I write R_{5 \ Gy} for this reference class. Throughout, I ignore delay steps and include only hard try-try steps. They argue (correctly, as I see it) that to be most typical within this reference class, having observed that Earth is habitable for another ~1 Gy, we should update towards 3 \lessapprox n \lessapprox 8. The SSA R_{5 \ Gy} likelihood ratio when updating n on our appearance time alone (ignoring preclusion by GCs) is

\frac{f_{n,h}(4.5 \ Gy)}{\int_0^{5 \ Gy} f_{n,h}(t) \mathrm{d}t}

where f_{n,h}(\cdot) is the Gamma distribution PDF with shape n and scale h. I take h = 10^{10} \ Gy. This likelihood ratio is largest for n \approx 5. We could further condition on the time that life first appeared, but this is not necessary to illustrate the point.

The normalised SSA likelihood ratio when updating on the completion time on Earth.

While their prior on n relies on this small reference class, their main argument relies on a larger reference class of all intelligent civilizations, R_{ICs}. They use this to model humanity’s birth rank as uniform in the appearance times of all advanced life, not just life on planets habitable for ~5 Gy. If we use the smaller reference class R_{5 \ Gy} throughout, then one updates towards 3 \lessapprox n \lessapprox 8, but human civilization is no longer particularly early, since all life on planets habitable for ~5 Gy appears in the next ~50 Gy due to the end of star formation. The existence of GCs has less explanatory power in this case.
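This likelihood ratio is easy to reproduce numerically. The sketch below takes the habitable window as 5.5 Gy (4.5 Gy elapsed plus the ~1 Gy remaining), which is my reading of the setup rather than a value the appendix states explicitly; with that window the ratio indeed peaks near n = 5.

```python
import math

def gamma_pdf(t, n, h):
    """Gamma(n, h) PDF for the completion time of n hard steps."""
    return t ** (n - 1) * math.exp(-t / h) / (math.gamma(n) * h ** n)

def likelihood_ratio(n, h=1e10, t_obs=4.5, habitable=5.5, steps=2000):
    """f_{n,h}(4.5 Gy) / int_0^L f_{n,h}(t) dt, by midpoint rule. L is taken
    as 5.5 Gy (4.5 Gy elapsed + ~1 Gy remaining), an assumption of this sketch."""
    dt = habitable / steps
    denom = sum(gamma_pdf((i + 0.5) * dt, n, h) for i in range(steps)) * dt
    return gamma_pdf(t_obs, n, h) / denom

# The ratio peaks at a moderate number of hard steps.
best_n = max(range(1, 11), key=likelihood_ratio)
```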
If one uses the larger reference class R_{ICs}, then when updating n on human civilization’s appearance time alone (ignoring preclusion by GCs), the SSA likelihood ratio is

\frac{f_{n,h}(4.5 \ Gy)}{\int_{5 \ Gy}^{L_{max}} K(L) \cdot \int_0^L f_{n,h}(t) \mathrm{d}t \ \mathrm{d}L}

where L_{max} is the maximum habitable duration and K(L) is the ‘number’ of planets habitable for L Gy.
The SSA R_{ICs} likelihood ratio when there are two types of planets in equal number: one type habitable for ~5 Gy and another for 100 Gy.

If we believe L_{max} to be large, then the likelihood ratio is maximal at n=1 and decreasing in n: if advanced life is hard, then it will appear more often on planets where it has longer to evolve, and increasing n makes life harder, so it decreases the total amount of advanced life and increases the fraction of life on longer-habitable planets. The reference class R_{ICs} converges to R_{5 \ Gy} as L_{max} decreases to 5 Gy, and one then updates towards 3 \lessapprox n \lessapprox 8. To summarise, the following are ‘compatible’:

Appendix: Varying the prior on L_{max}

The SSA R_{ICs}, SSA R_{all} and ADT average updates are sensitive to the lower bound on the prior for L_{max}. When there are no GCs (that can preclude ICs), human civilization’s typicality is primarily determined by L_{max}: the smaller L_{max} is, the more typical human civilization is. If L_{max} is certainly high, worlds with GCs that preclude ICs are relatively more appealing to SSA.
Here I show updates for variants of the prior on L_{max}, otherwise using the balanced prior. Notably, even when L_{max} \sim \mathrm{LogNormal}(\mu = 500 \ Gy, \sigma = 1.5), which has P(L_{max} < 10 \ Gy) = 0.3\%, SSA R_{ICs} gives around 58% credence on being alone, and has posterior P(L_{max} < 10 \ Gy) = 55\%. As seen below, increasing the lower bound on the prior of L_{max} increases the posterior implied rate of GCs.

Implied posterior on #N_{GC}

Posterior on L_{max}

SSA R_{ICs}

SSA R_{all}

ADT average

Appendix: Marginalised posteriors

The following tables show the marginalised posteriors for all updates (excluding the trade and conflict scenarios).
X_c

X_v

Appendix: Updates from uniform priors

I show that the results follow when taking uniform/loguniform priors on the model parameters as follows:

Appendix: Vacuum decay

Technologies to produce false vacuum decay, or other highly destructive technologies, will have a non-zero rate of ‘detonation’. Such technologies could be used accidentally, or deliberately as a scorched-earth policy during conflict between GCs. Non-gravitationally-bound volumes of the universe will become causally separated by ~200 Gy, after which GCs are safe from light-speed decay bubbles. The model presented can be used to estimate the fraction of OUSVs consumed by such decay bubbles. I write f_{VD} for the fraction of ICs that trigger a vacuum decay shortly after they become an IC. More relevantly, one might instead consider vacuum decay events being triggered when GCs meet one another.

The fraction of OUSVs inside vacuum decay bubbles for varying f_{VD}, the fraction of ICs that trigger vacuum decay bubbles travelling at c. These plots have n=5, h=100 \ Gy, d=0.5 \ Gy, w=10^{-5}, L_{max}=5 \ Gy, u=10^{-6} and v=0.8c. Even for f_{VD} = 0.001, around 50% of the OUSVs on average will eventually be consumed by vacuum decay bubbles travelling at the speed of light.

Of course, this is highly speculative, but it suggests that such considerations may change the behaviour of GCs before the era of causal separation. For example, risk-averse or pure-time-discounting GCs may trade off some expansion for the creation of utility. One could run the entire model with f_{GC} replaced by f_{VD}. SSA R_{ICs} supports the existence of GCs for L_{max} \geq 30 \ Gy, and so would similarly support the existence of ICs that trigger false vacuum decay as a deadline.

Appendix: hard steps and the ‘power law’

As mentioned, I model the completion time of the hard steps with the Gamma distribution, which has PDF

f_{n,h}(t) = \frac{1}{\Gamma(n)} \cdot \frac{1}{h^n} \cdot t^{n-1} \cdot \exp(-t/h)

When t \ll h, \exp(-t/h) \approx 1 and so f_{n,h}(t) \propto t^{n-1}. That is, when the steps are sufficiently hard, the probability of completion grows polynomially in t. Increasing n leads to a greater ‘clumping’ of completions near the end of the available time.
The distribution of completion times for n=1 and h=10^{10} \ Gy: the PDF (red) is constant and the CDF (blue) grows linearly.

The distribution of completion times for n=6 and h=10^{10} \ Gy: the PDF (red) grows approximately as t^5 and the CDF (blue) approximately as t^6.

When hard steps are present, planets habitable for longer see a greater share of all life than shorter-lived planets. For example, a planet habitable for 50 Gy will have approximately (50 \ Gy / 5 \ Gy)^n = 10^n times the probability of life appearing compared to a planet habitable for 5 Gy. For anthropic theories that update towards worlds where observers like us are more typical, such as the self-sampling assumption, increasing n while allowing longer-lived planets makes observers like us less typical.
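The 10^n factor can be checked directly against the Gamma CDF:

```python
import math

def completion_prob(L, n, h=1e10, steps=4000):
    """P(all n hard try-try steps complete within habitable duration L):
    the Gamma(n, h) CDF at L, via midpoint integration of the PDF."""
    dt = L / steps
    total = sum(((i + 0.5) * dt) ** (n - 1) * math.exp(-(i + 0.5) * dt / h)
                for i in range(steps))
    return total * dt / (math.gamma(n) * h ** n)

# With t << h the CDF grows like (L/h)^n / n!, so a planet habitable for
# 50 Gy is roughly 10^n times more likely to host life than one habitable
# for 5 Gy.
ratio_n6 = completion_prob(50.0, 6) / completion_prob(5.0, 6)
```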

Comment

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=MSiXCbfTumreDGLeu

How strongly does grabby aliens theory depend on the habitability of the planets near red dwarfs? I have read pretty good arguments that they will never become habitable, for two reasons: powerful magnetic explosions on red dwarfs will strip them of atmospheres and water; and the planets will become tidally locked soon, while the radioactive decay energy in their cores will run out in a few billion years, so they will be geologically dead in 5-10 billion years.

Comment

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=2atpdKMg5eNHCpJZ4

The habitability of planets around longer lived stars is a crux for those using SSA, but not SIA or decision theoretic approaches with total utilitarianism. I show in this section that if one is certain that there are planets habitable for at least 20 \ Gy , then SSA with the reference class of observers in pre-grabby intelligent civilizations gives ~30% on us being alone in the observable universe. For 50 \ Gy this gives ~10% on being alone.

Comment

Thanks!

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=mmTGJLDFkTow6CnBX

I haven’t yet read this, but do you have a brief explanation for how your results differ from Hanson et al’s?

Comment

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=pXMakw4mcHd6yrmpk

> Using SSA[1], or applying a non-causal decision theoretic approach with average utilitarianism, one should be confident (~85%) that GCs are not in our future light cone, thus rejecting the result of Hanson et al. (2021). However, this update is highly dependent on one's beliefs in the habitability of planets around stars that live longer than the Sun: if one is certain that such planets can support advanced life, then one should conclude that GCs are most likely in our future light cone. Further, I explore how an average utilitarian may wager there are GCs in their future light cone if they expect significant trade with other GCs to be possible.

Basically, Hanson et al. made a mistake with their anthropics. Or so it seems; see the first appendix.

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=xgBRnaqL5Zvfmdcbp

Great report. I found the high decision-worthiness vignette especially interesting.

I haven’t read it closely yet, so people should feel free to be like "just read the report more closely and the answers are in there", but here are some confusions and questions that have been on my mind when trying to understand these things:

Has anyone thought about this in terms of a "consequence indication assumption" that’s like the self-indication assumption but normalizes by the probability of producing paths from selves to cared-about consequences instead of the probability of producing selves? Maybe this is discussed in the anthropic decision theory sequence and I should just catch up on that?

I wonder how uncertainty about the cosmological future would affect grabby aliens conclusions. In particular, I think not very long ago it was thought plausible that the affectable universe is unbounded, in which case there could be worlds where aliens were almost arbitrarily rare that still had high decision-worthiness. (Faster than light travel seems like it would have similar implications.)

SIA and SSA mean something different now than when Bostrom originally defined them, right? Modern SIA is Bostrom’s SIA+SSA and modern SSA is Bostrom’s (not SIA)+SSA? Joe Carlsmith talked about this, but it would be good if there were a short comment somewhere that just explained the change of definition, so people can link it whenever it comes up in the future. (edit: ah, just noticed footnote 13)

SIA doomsday is a very different thing than the regular doomsday argument, despite the name, right? The former is about being unlikely to colonize the universe, the latter is about being unlikely to have a high number of observers? A strong great filter that lies in our future seems like it would require enough revisions to our world model to make SIA doom basically a variant of the simulation argument, i.e. the best explanation of our ability to colonize the stars not being real would be the stars themselves not being real. Many other weird hypotheses seem like they’d become more likely than the naive world view under SIA doom reasoning. E.g., maybe there are 10^50 human civilizations on Earth, but they’re all out of phase and can’t affect each other, but they can still see the same sun and stars. Anyway, I guess this problem doesn’t turn up in the "high decision-worthiness" or "consequence indication assumption" formulation.

Comment

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=wDtmSiRuL5qje8gxL

> Great report. I found the high decision-worthiness vignette especially interesting.

Thanks! Glad to hear it.

> Maybe this is discussed in the anthropic decision theory sequence and I should just catch up on that?

Yep, this is kinda what anthropic decision theory (ADT) is designed to be :-D ADT + total utilitarianism often gives similar answers to SIA.

> I wonder how uncertainty about the cosmological future would affect grabby aliens conclusions. In particular, I think not very long ago it was thought plausible that the affectable universe is unbounded, in which case there could be worlds where aliens were almost arbitrarily rare that still had high decision-worthiness. (Faster than light travel seems like it would have similar implications.)

Yeah, this is a great point. Toby Ord mentions here the potential for dark energy to be harnessed, which would lead to a similar conclusion. Things like this may be Pascal's muggings (i.e., we wager our decisions on being in a world where our decisions matter infinitely). Since our decisions might *already* matter 'infinitely' (evidential-like decision theory plus an infinite world), I'm not sure how this pans out.

> SIA doomsday is a very different thing than the regular doomsday argument, despite the name, right? The former is about being unlikely to colonize the universe, the latter is about being unlikely to have a high number of observers?

Exactly. SSA (with a sufficiently large reference class) always predicts Doom as a consequence of its structure, but SIA doomsday is contingent on the case we happen to be in (colonisers, as you mention).

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=WMrxzrsZhAwPEDLh7

If you assume SIA, it strongly favours interstellar panspermia, and in that case, all grabby aliens will be in our galaxy, while other galaxies will be mostly dead. This means shorter timelines before meeting them. Could your model be adapted to take this into account? Could your model also include a possibility of the SETI-attack: grabby aliens sending malicious radio signals with AI description ahead of their arrival?

Comment

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=y9NSJgoK6RHwapRRb

> Could your model also include a possibility of the SETI-attack: grabby aliens sending malicious radio signals with AI description ahead of their arrival?

I briefly discuss this in Chapter 4. My tentative conclusion is that we have little to worry about in the next hundred or thousand years, especially (though I do not mention this there) if we think malicious grabby aliens would try particularly hard to have their signals discovered.

Comment

My view is that the signal would be emitted constantly, so such a GC is in our past light cone, but it may be so remote that we are still unable to detect the signal. However, if they controlled a large part of the visible sky, they would be able to create something visible: so either they don't want to be seen, or they don't exist.

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=5cqjTvK8YoAHQ6rgg

I agree it seems plausible SIA favours panspermia, though my rough guess is that this doesn't change the model too much. Conditioning on panspermia happening (and so the majority of GCs arriving through panspermia), the number of hard steps n in the model can just be seen as the number of post-panspermia steps. I then think this doesn't change the distribution of ICs or GCs spatially if (1) the post-panspermia steps are sufficiently hard, and (2) a GC can quickly expand to contain the volume over which its panspermia of origin occurred. The hardness assumption implies that GC origin times will be sufficiently spread out for a single GC to prevent any planets with m < n step completions of life from becoming GCs.

Comment

Yes, if a "GC can quickly expand to contain the volume over which its panspermia of origin occurred", then we return to the model of intergalactic grabby aliens. But if the panspermia volume is relatively large and the speed of colonisation is relatively small, each such volume will contain several civilizations which appear almost simultaneously. They will have an age difference of around 1 million years and be separated by less than 100 kly, so they will arrive soon. We will encounter such panspermia-brothers long before we meet grabby aliens from other remote galaxies.

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=eTtbkd4gftNBxxuNE

[edited]

Comment

https://www.lesswrong.com/posts/iwWjBH2rBF7ExXeEG/replicating-and-extending-the-grabby-aliens-model?commentId=avtQCHsBpFSkKzyn7

> Wouldn't the respective type of utilitarian already have the corresponding expectations on future GCs? If not, then they aren't the type of utilitarian that they thought they were.

I'm not sure what you're saying here. Are you saying that in general, a [total][average] utilitarian wagers for [large][small] populations?

> So there's a lower bound on the chance of meeting a GC 44e25 meters away.

Yep! (Only if we become grabby, though.)

> Lastly, the most interesting aspect is the symmetry between abiogenesis time and the remaining habitability time (only 500 million years left, not a billion like you mentioned).

What's your reference for the 500 million year lifespan remaining? I followed Hanson et al. in using the end of the oxygenated atmosphere as the end of the lifespan.

> Just because you can extend the habitability window doesn't mean you should when doing anthropic calculations due to reference class restrictions.

Yep, I agree. I don't do the SSA update with the reference class of observers-on-planets-of-total-habitability-X-Gy, but agree that if I did, this 500 My difference would make a difference.