All Possible Views About Humanity's Future Are Wild

By Holden Karnofsky · Cold Takes (https://www.cold-takes.com/) · Jul 13, 2021

Audio also available by searching Stitcher, Spotify, Google Podcasts, etc. for "Cold Takes Audio"

[Diagram: Today's world → Transformative AI → Digital people → world run by misaligned AI, or world run by something else → stable, galaxy-wide civilization]


Before I continue, I should say that I don't think humanity (or some digital descendant of humanity) expanding throughout the galaxy would necessarily be a good thing - especially if this prevents other life forms from ever emerging. I think it's quite hard to have a confident view on whether this would be good or bad. I'd like to keep the focus on the idea that our situation is "wild." I am not advocating excitement or glee at the prospect of expanding throughout the galaxy. I am advocating seriousness about the enormous potential stakes.

My view

This is the first in a series of pieces about the hypothesis that we live in the most important century for humanity. In this series, I'm going to argue that there's a good chance of a productivity explosion by 2100, which could quickly lead to what one might call a "technologically mature"[1] civilization. That would mean that:

The "conservative" view

Let's say you agree with me about where humanity could eventually be headed - that we will one day have the technology to create robust, stable settlements throughout our galaxy and beyond. But you think it will take far longer than I'm saying. A key part of my view (which I'll write about more later) is that within this century, we could develop advanced enough AI to start a productivity explosion. Say you don't believe that.

The skeptical view

The "skeptical view" would essentially be that humanity (or some descendant of humanity, including a digital one) will never spread throughout the galaxy. There are many reasons it might not:

Why all possible views are wild: the Fermi paradox

I'm claiming that it would be "wild" to think we're basically assured of never spreading throughout the galaxy, but also that it's "wild" to think that we have a decent chance of spreading throughout the galaxy. In other words, I'm calling every possible belief on this topic "wild." That's because I think we're in a wild situation. Here are some alternative situations we could have found ourselves in that I wouldn't consider so wild:

This pale blue dot could be an awfully big deal

Describing Earth as a tiny dot in a photo from space, Ann Druyan and Carl Sagan wrote:

The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that, in glory and triumph, they could become the momentary masters of a fraction of a dot ... Our posturings, our imagined self-importance, the delusion that we have some privileged position in the Universe, are challenged by this point of pale light ... It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world.

This is a somewhat common sentiment - that when you pull back and think of our lives in the context of billions of years and billions of stars, you see how insignificant all the things we care about today really are.

But here I'm making the opposite point. It looks for all the world as though our "tiny dot" has a real shot at being the origin of a galaxy-scale civilization. It seems absurd, even delusional, to believe in this possibility. But given our observations, it seems equally strange to dismiss it. And if that's right, the choices made in the next 100,000 years - or even this century - could determine whether that galaxy-scale civilization comes to exist, and what values it has, across billions of stars and billions of years to come.

So when I look up at the vast expanse of space, I don't think to myself, "Ah, in the end none of this matters." I think: "Well, some of what we do probably doesn't matter. But some of what we do might matter more than anything ever will again. ...It would be really good if we could keep our eye on the ball. ...[gulp]"

Next in series: The Duplicator

Notes


  1. Or Kardashev Type III.
  2. If we are able to create mind uploads, or detailed computer simulations of people that are as conscious as we are, it could be possible to put them in virtual environments that automatically reset, or otherwise "correct" the environment, whenever the society would otherwise change in certain ways (for example, if a certain religion became dominant or lost dominance). This could give the designers of these "virtual environments" the ability to "lock in" particular religions, rulers, etc. I'll discuss this more in future pieces (now available here and here).
  3. I've focused on the "galaxy" somewhat arbitrarily. Spreading throughout all of the accessible universe would take a lot longer than spreading throughout the galaxy, and until we do, it's still imaginable that some species from outside our galaxy will disrupt the "stable galaxy-scale civilization." But I think accounting for this correctly would add a fair amount of complexity without changing the big picture; I may address it in some future piece.
  4. A logarithmic version doesn't look any less weird, because the distances between the "middle" milestones are tiny compared to both the stretches of time before and after them. More fundamentally, I'm talking about how remarkable it is to be in the most important [small number] of years out of [big number] of years - that's best displayed using a linear axis. It's often the case that weird-looking charts look more reasonable with logarithmic axes, but in this case I think the chart looks weird because the situation is weird. Probably the least weird-looking version of this chart would have the x-axis be something like the logged distance from the year 2100, but that would be a heck of a premise for a chart - it would basically bake in my argument that this appears to be a very special time period. (See the code sketch after these notes for a rough illustration.)
  5. This is exactly the kind of thought that kept me skeptical for many years of the arguments I'll be laying out in the rest of this series about the potential impacts, and timing, of advanced technologies. Grappling directly with how "wild" our situation seems to ~undeniably be has been key for me.
  6. Spreading throughout the galaxy would certainly be harder if nothing like mind uploading (which I discuss in a separate piece, and which is part of why I think future space settlements could have "value lock-in" as discussed above) can ever be done. I would find the view that "mind uploading is impossible" to be "wild" in its own way, because it implies that human brains are so special that there is simply no way, ever, to digitally replicate what they're doing. (Thanks to David Roodman for this point.)
  7. That is, advanced AI that pursues objectives of its own, which aren't compatible with human existence. I'll be writing more about this idea. Existing discussions of it include the books Superintelligence, Human Compatible, Life 3.0, and The Alignment Problem. The shortest, most accessible presentation I know of is "The case for taking AI seriously as a threat to humanity" (Vox article by Kelsey Piper). This report on existential risk from power-seeking AI, by Open Philanthropy's Joe Carlsmith, lays out a detailed set of premises that would collectively imply the problem is a serious one.
  8. Thanks to Carl Shulman for this point.
  9. See "Dissolving the Fermi Paradox" (Sandberg, Drexler, and Ord): https://arxiv.org/pdf/1806.02404.pdf
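
As a rough illustration of the point in note 4 about chart axes, here is a minimal sketch in Python (assuming matplotlib and numpy are available; the milestone labels and years are illustrative placeholders I've chosen, not values from the post) comparing a linear timeline with the "logged distance from the year 2100" version:

# Sketch: why a log axis doesn't make the timeline look less weird.
# Milestone years below are rough illustrative values, not from the post.
import matplotlib.pyplot as plt
import numpy as np

milestones = {
    "Big Bang": -13.8e9,
    "Earth forms": -4.5e9,
    "First life": -3.5e9,
    "First humans": -3e5,
    "Writing": -5e3,
    "Industrial Revolution": 1800,
    "Now": 2021,
}
years = np.array(list(milestones.values()), dtype=float)

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 4))

# Linear axis: everything from "First humans" onward collapses into
# what looks like a single point at the far right.
ax1.scatter(years, np.zeros_like(years))
ax1.set_yticks([])
ax1.set_title("Linear time axis (years)")

# Logged distance from the year 2100: spreads the milestones out, but
# only by baking in the premise that 2100 is a special reference point.
ax2.scatter(np.log10(2100 - years), np.zeros_like(years))
ax2.set_yticks([])
ax2.set_title("log10(years before 2100)")

plt.tight_layout()
plt.show()

On the linear axis, the span from the first humans to now is about 0.002% of the chart's width, so the recent milestones are visually indistinguishable; the second axis separates them only because it measures everything relative to 2100, which is exactly the premise note 4 flags.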