
On the occasion of the end of the old decade and the beginning of the new, Scott Alexander of Slate Star Codex wrote a post titled What Intellectual Progress Did I Make in the 2010s. I am generally a great admirer of Alexander; in fact, though I don’t mention it often in this space, I have been turning every one of his blog posts into an episode of a podcast feed since late 2017. In particular, I am impressed by his objectivity, his discernment, and his dispassionate analysis. But in this particular post he said something to which I take strong exception:

In terms of x-risk: I started out this decade concerned about The Great Filter. After thinking about it more, I advised readers Don’t Fear The Filter. I think that advice was later proven right in Sandberg, Drexler, and Ord’s paper on the Fermi Paradox, to the point where now people protest to me that nobody ever really believed it was a problem.

I am not only one of those who once believed it was a problem, I’m one who still believes it’s a problem. And in particular it’s a problem for rationalists and transhumanists, who are exactly the kind of people Alexander most often associates with, and therefore the people most likely to now protest that nobody ever really believed it was a problem. But before we get too deep into things, it would probably be good to make sure people understand what we’re talking about.

Hopefully, most people reading this post are familiar with Fermi’s Paradox, but for those who aren’t, it’s the apparent paradox between the enormous number of stars and the enormous amount of time they’ve existed, and the lack of any evidence for civilizations, other than our own, arising among those billions of stars over those billions of years. Even if you were already familiar with the paradox you may not be familiar with the closely related idea of the Great Filter which is an attempt to imagine the mechanism behind the paradox, and in particular when that mechanism might take effect. 

Asking what prevented anyone else from getting as far, technologically, as we’ve gotten (or, most likely, a lot farther) is to speculate about the Great Filter. It can also take an inverted form, when someone asks what makes us special. But either way, the Great Filter is that thing which is either required for a detectable interstellar presence or which prevents it. And what everyone wants to know is whether this filter is in front of us or behind us. There are many reasons to think it might be ahead of us. But most people who consider the question hope that it’s behind us, that we have passed the filter. That we have, one way or another, defeated whatever it is which prevents life from developing and being detectable over interstellar distances.

Having ensured we’re on the same page, we can return to Alexander’s original quote above, where he mentions two sources for his lack of concern: first, his own post on the subject, “Don’t Fear the Filter”; and second, the Sandberg, Drexler, and Ord paper on the paradox.


Let’s start with his post. It consists of him listing four broad categories of modern risks which people hypothesize might represent the filter, which would indicate both that the filter is ahead of us and that we should be particularly concerned about the risk in question. Alexander then proceeds to argue that these risks are unlikely to be the Great Filter. As I said, I’m a great admirer of Alexander, but he makes several mistakes in this post.

To begin with, he makes the very mild mistake of dismissing anything at all. Obviously this is eminently forgivable; he’s entitled to his opinion and he does justify it. But given how limited our knowledge is in this domain, I think it’s a mistake to dismiss anything. To return to my last post: if someone had come to Montezuma in 1502, when he took the throne, and told him that strangers had arrived from another world, that within 20 years he would be dead and his empire destroyed, and that in less than 100 years 95% of everyone in the world (his world) would be dead, he would have been dismissed as a madman. And yet that’s exactly what happened.

Second, his core justification for arguing that we shouldn’t fear the filter is that it has to be absolutely effective at preventing all civilizations (other than our own) from interstellar communication. He then proceeds to list four things which are often mentioned as potential filters, but which don’t fulfill this criterion of comprehensiveness, because all four are straightforward enough to ameliorate that some civilization should manage it even if ours ends up being unable to. This is a reasonable argument for dismissing these four items, but in order to decisively claim that we shouldn’t “fear the filter”, he should at least make some attempt to identify where the filter actually is, if it’s not one of the things he lists. To be charitable, he seems to be arguing that the filter is behind us. But if so, you have to look pretty hard to find that argument in his post.

This takes me to my third point. It would be understandable if he made a strong argument for the filter being behind us, but really, to credibly banish all fear, even that isn’t enough. You would have to make a comprehensive argument, bringing up all possible candidates for a future filter, not merely the ones that are currently popular. It’s not enough to bring up a few x-risk candidates and then dismiss them for being surmountable. The best books on the subject, like Stephen Webb’s If the Universe Is Teeming with Aliens … WHERE IS EVERYBODY?: Seventy-Five Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life (which I talked about here) and Milan M. Ćirković’s The Great Silence: Science and Philosophy of Fermi’s Paradox (which I talked about here and my personal favorite book on the topic) all do this. Which takes me to my final point.

People like Ćirković and Webb are not unaware of the objections raised by Alexander. Both spend quite a bit of time on the idea that whatever is acting as the filter would have to be exceptionally comprehensive, and based on that and other factors they rate the plausibility of each of the proposed explanations. Webb does it as part of each of his 75 entries, while Ćirković provides a letter grade for each. How does Ćirković grade Alexander’s four examples?

  1. Nuclear War: Alexander actually includes all “garden variety” x-risks, but I’ll stick to nuclear war in the interests of space. Ćirković gives this a D.
  2. Unfriendly AI: Ćirković places this in the category of all potentially self-destructive technologies and gives the entire category a D+.
  3. Transcendence: Ćirković gives this a C-/F. I can’t immediately remember why he gave it two grades, nor did a quick scan of the text reveal anything. But even a C- is still a pretty bad grade.
  4. The Dark Forest (Exterminator aliens): Ćirković gives this a B+, his second highest rating out of all candidates. I should say I disagree with this rating (see here) for much the same reasons as Alexander.

With the exception of the last one, Ćirković has the same low opinion of these options as Alexander. And if we grant that Alexander is right and Ćirković is wrong on #4 (which I’m happy to do, since I agree with Alexander), then the narrow point Alexander makes is entirely correct: everyone agrees that these four things are probably not the Great Filter. But that still leaves 32 other potential filters if we use Ćirković’s list, and north of 60 if we use Webb’s. And yes, some of them are behind us (I’m too lazy to separate them out), but the point is that Alexander’s list is not even close to being exhaustive.

(Also, any technologically advanced civilization would probably have to deal with all these problems at the same time, i.e. if you can create nukes you’re probably close to creating an AI, or exhausting a single planet’s resources. Perhaps individually they should each get a D grade, but what about the combination of all of them?)

If I were being uncharitable I might accuse Alexander of weak-manning arguments for the paradox and the filter, but I don’t actually think he was doing that. Rather, my sense is that, as happens to many people with many subjects, despite his breadth of knowledge elsewhere he doesn’t realize how broad and deep the Fermi’s Paradox discussion can get, or how many potential future filters there are which he has never considered.


Most people would say that the strongest backing for Alexander’s claim is not his 2014 post, but rather the Sandberg, Drexler, and Ord study (the SDO paper).

(Full disclosure: In discussing the SDO paper I’m re-using some stuff from an earlier post I did at the time the study was released.)

To begin with, one of Alexander’s best known posts is titled Beware the Man of One Study, where he cautions against using a single study to reach a conclusion or make a point. But isn’t that exactly what he’s doing here? Now to be fair, in that post he’s mostly cautioning against cherry-picking one study out of dozens to prove your point, which is not the case here, mostly because there really is only this one study; but I think the warning stands. Also, if you were going to stake a claim on a single study, the SDO paper is a particularly bad study to choose. This is not to say that the results are fraudulent, or that the authors made obvious mistakes, or that the study shouldn’t have been published, only that the study involves throwing together numerous estimates (guesses?) across a wide range of disciplines where, in most cases, direct measurement is impossible.

The SDO paper doesn’t actually center on the paradox. It takes as its focus Drake’s equation, which will hopefully be familiar to readers of this blog. If not: basically, Drake’s equation attempts to come up with a guess for how many detectable extraterrestrial civilizations there might be, by determining how many planets might get through all the filters required to produce such a civilization (e.g. How many planets are there? What percentage have life? What percentage of that life is intelligent? Etc.). Once you’ve filled in all of these values, the equation spits out an expected value for the number of detectable civilizations, which generally turns out to be reasonably high. And yet there aren’t any, which then brings in the paradox.
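To make the mechanics concrete, here is a minimal point-estimate version of Drake’s equation in Python. Every value below is an illustrative guess of my own, not a published estimate; the point is only the shape of the calculation: seven terms multiplied together.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of detectable civilizations (Drake's equation)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# One traditional fill-in of the seven terms (illustrative guesses):
N = drake(R_star=1.0,  # new stars formed per year in the galaxy
          f_p=0.5,     # fraction of stars with planets
          n_e=2,       # habitable planets per star that has planets
          f_l=0.5,     # fraction of habitable planets developing life
          f_i=0.1,     # fraction of those developing intelligence
          f_c=0.1,     # fraction of those becoming detectable
          L=10_000)    # years a civilization stays detectable
print(N)  # a point estimate comfortably above one
```

Plug in pessimistic values and N drops below one; plug in optimistic ones and the galaxy should be teeming. The structure of the calculation is trivial; the uncertainty lives entirely in the inputs.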

The key innovation the SDO paper brings to the debate is to map out the probability distribution one gets from incorporating the best current estimates for every parameter in the equation, and to point out that this distribution is very asymmetrical. We’re used to normal distributions (i.e. bell curves), in which the average and the most likely outcome are basically the same thing. But the distribution of potential outcomes when running numbers through Drake’s equation is ridiculously wide and, on top of that, not normally distributed, which means, according to the study, that the most probable situation is that we’re alone, even though the average number of expected civilizations is greater than one. Or to borrow the same analogy Alexander does:

Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilizations. If it came up tails, He made none besides Earth. Using our one parameter Drake Equation, we determine that on average there should be 5 billion alien civilizations. Since we see zero, that’s quite the paradox, isn’t it?

No. In this case the mean is meaningless. It’s not at all surprising that we see zero alien civilizations, it just means the coin must have landed tails.

As I said, it’s an innovative study, and a great addition to the discussion, but I worry people are putting too much weight on it, because the paper does some interesting and revealing math and it looks like science when, as Michael Crichton pointed out in a famous speech at Stanford, Drake’s equation is most definitely not science. (Or if you want this same point without climate change denial you could check out this recent post from friend of the blog Mark.) The SDO paper is a series of seven (the number of terms in Drake’s equation) very uncertain estimates run through a Monte Carlo simulation, and I think there’s a non-trivial danger of garbage in, garbage out. But at a minimum I don’t think the SDO paper should generate the level of certainty Alexander claims for it.
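For readers who want to see the shape of the argument, here is a toy version of the SDO move (my own sketch, with made-up uncertainty ranges rather than the paper’s actual distributions): draw each of the seven terms from a wide log-uniform range, and look at the whole distribution of N rather than a single point estimate.

```python
import math
import random

random.seed(0)

def log_uniform(lo, hi):
    """Draw uniformly in log space: equal weight per order of magnitude."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

samples = []
for _ in range(100_000):
    N = (log_uniform(0.1, 10)      # R*: star formation rate (stars/yr)
         * log_uniform(0.1, 1)     # f_p: fraction of stars with planets
         * log_uniform(0.1, 10)    # n_e: habitable planets per system
         * log_uniform(1e-6, 1)    # f_l: fraction developing life
         * log_uniform(1e-4, 1)    # f_i: fraction developing intelligence
         * log_uniform(1e-2, 1)    # f_c: fraction becoming detectable
         * log_uniform(1e2, 1e8))  # L: years a civilization is detectable
    samples.append(N)

mean_N = sum(samples) / len(samples)
p_alone = sum(n < 1 for n in samples) / len(samples)
print(f"mean N: {mean_N:,.0f}; fraction of draws with N < 1: {p_alone:.2f}")
```

Even with these invented ranges the signature result appears: the mean of N is enormous, while the majority of individual draws come out below one. That is the paper’s asymmetry in miniature, and also a reminder of how completely the answer depends on the ranges you feed in.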

If this is right – and we can debate exact parameter values forever, but it’s hard to argue with their point-estimate-vs-distribution-logic – then there’s no Fermi Paradox. It’s done, solved, kaput. Their title, “Dissolving The Fermi Paradox”, is a strong claim, but as far as I can tell they totally deserve it.

His dismissal of parameter values is particularly hard to understand (unless he thinks current estimate ranges will basically hold forever). The range of values determines the range of the distribution, and clearly there are distributions where the SDO paper’s conclusion no longer holds. All it would take to change the verdict from “most likely alone” to “there should be several civilizations” would be a significant improvement in any of the seven terms, or a minor improvement in several. Which seems to be precisely what’s been happening.
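That sensitivity is easy to demonstrate with the same kind of toy simulation (again, the ranges here are invented for illustration, not taken from the paper): narrow just one term, the fraction of habitable planets that develop life, and watch the “probably alone” verdict flip.

```python
import math
import random

def log_uniform(rng, lo, hi):
    """Draw uniformly in log space between lo and hi."""
    return 10 ** rng.uniform(math.log10(lo), math.log10(hi))

def p_alone(f_l_range, trials=100_000, seed=0):
    """Fraction of Monte Carlo draws giving fewer than one civilization."""
    rng = random.Random(seed)
    alone = 0
    for _ in range(trials):
        N = (log_uniform(rng, 0.1, 10)       # R*: star formation rate
             * log_uniform(rng, 0.1, 1)      # f_p: stars with planets
             * log_uniform(rng, 0.1, 10)     # n_e: habitable planets
             * log_uniform(rng, *f_l_range)  # f_l: develop life
             * log_uniform(rng, 1e-4, 1)     # f_i: develop intelligence
             * log_uniform(rng, 1e-2, 1)     # f_c: become detectable
             * log_uniform(rng, 1e2, 1e8))   # L: years detectable
        if N < 1:
            alone += 1
    return alone / trials

wide = p_alone((1e-6, 1))   # hugely uncertain f_l: "probably alone"
narrow = p_alone((0.1, 1))  # panspermia-style confidence in f_l
print(f"P(alone), wide f_l: {wide:.2f}; narrow f_l: {narrow:.2f}")
```

Tightening a single term drops the probability of being alone from a solid majority of draws to a minority, which is why treating the current parameter ranges as settled seems premature.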


From 1961, when Drake’s equation was first proposed, until the present day, our estimates of the various terms have gotten better, and as our uncertainty decreased it has almost always pointed to life being more common.

One great example of this is the current boom in exoplanet discovery, which has vastly reduced the uncertainty in the number of stars with planets (the second term in the equation) and the number of planets which might support life (the third term). The question is: as uncertainty continues to be reduced in the future, in which direction will things head? Towards a higher estimate of detectable civilizations or towards a lower one? The answer, so far as I can tell, is that every time our uncertainty decreases, the estimate updates in favor of detectable civilizations being more common. There are at least three examples of this:

  1. The one I just mentioned. According to Wikipedia when Frank Drake first proposed his equation, his guess for the fraction of stars with planets was ½. After looking at the data from Kepler, our current estimate is basically that nearly all stars have planets. Our uncertainty decreased and it moved in the direction of extraterrestrial life and civilizations being more probable.
  2. The number of rocky planets, which relates to the term in the equation for the fraction of total planets which could sustain life. We used to think that rocky planets could only appear seven billion years or so into the lifetime of the universe. Now we know that they appeared much earlier. Once again our uncertainty decreased, and it did so in the direction of life and civilizations being more probable.
  3. The existence of extremophiles. We used to think that there was a fairly narrow band of conditions where life could exist, and then we found life in underwater thermal vents, in areas of extreme cold and dryness, in environments of high salinity, high acidity, high pressure, etc. etc. Yet another case where as we learned more, life became more probable, not less.

But beyond all of this, being alone in the galaxy/universe reverses one of the major trends in science: the trend towards de-emphasizing humanity’s place in creation.

In the beginning, if you were the ruler of a vast empire, you must have thought that you were the center of creation. Alexander the Great is said to have conquered the known world. I’m sure Julius Caesar couldn’t have imagined an empire greater than Rome, but I think Emperor Yuan of Han would have disagreed.

But surely, had they known each other, they could have agreed that between the two of them they more or less ruled the whole world? I’m sure the people of the Americas would have argued with that. But surely all of them together could agree that the planet on which they all lived was at the center of creation. But then Copernicus comes along and says, “Not so fast.” (And yes, I know about Aristarchus of Samos.)

“Okay, we get it. The Earth revolves around the Sun, not the other way around. But at least we can take comfort in the fact that man is clearly different and better than the animals.”

“About that…” says Darwin.


“Well at least our galaxy is unique…”

“I hate to keep bursting your bubble, but that’s not the case either,” chimes in Edwin Hubble.

At every step in the process, when someone has thought that humanity was special in any way, someone comes along and shows that we’re not. It happens often enough that there’s a name for it: the Copernican Principle (after one of the biggest bubble poppers), which, for our purposes, is interchangeable with the Mediocrity Principle. Together they say that there is nothing special about our place in the cosmos, about us, or about the development of life. Stephen Hawking put it as follows:

The human race is just a chemical scum on a moderate-sized planet, orbiting around a very average star in the outer suburb of one among a hundred billion galaxies.

This is what scientists have believed, but if we are truly the only intelligent, technology-using life form in the galaxy, or more amazingly in the visible universe, then suddenly we are very special indeed.


As I mentioned, the SDO paper, despite its title, is only secondarily about Fermi’s Paradox. It’s actually entirely built around Drake’s equation, which is one way of approaching the paradox, but one that has significant limitations. As Ćirković says in The Great Silence:

In the SETI [Search for Extraterrestrial Intelligence] field, invocation of the Drake equation is nowadays largely an admission of failure. Not the failure to detect extraterrestrial signals—since it would be foolish to presuppose that the timescale for the search has any privileged range of values, especially with such meagre detection capacities—but of the failure to develop the real theoretical grounding for the search.

Ćirković goes on to complain that the equation is often used in a very unsophisticated fashion, and that in reality it should be “explicated in terms of relevant probability distribution functions”. To be fair, that does appear to be what the SDO paper is attempting, though whether it succeeds is a different matter; Ćirković seems to be suggesting a methodology significantly more complicated than the one used by the study. But this is far from the only problem with the equation. The biggest is that none of the terms accounts for interstellar travel, by life or civilizations, to planets beyond those where they arose in the first place.

The idea of interstellar colonization by advanced civilizations is a staple of science fiction and easy enough to imagine, but most people have a more difficult time imagining that life itself might do the same. This idea is called panspermia, and from where I sit, the evidence for it appears to be increasing as well. On the off chance that you’re unfamiliar with the term, panspermia is the idea that life, in its most basic form, started somewhere else and then arrived on Earth once things were already underway. Of greater importance for us is the idea that if it could travel to Earth, there’s a good chance it could travel anywhere (and everywhere). In fairness, there is some chance life started on, say, Mars and travelled here, in which case maybe life isn’t “everywhere”. But if panspermia happened, and it didn’t come from somewhere nearby, then that changes a lot.

Given the tenacity of life I’ve already mentioned above (see extremophiles), there’s good reason to believe that once it gets started it would just keep going. This section is more speculative than the last, but I don’t think we can rule out the idea, and it’s something Drake’s equation completely overlooks, and by extension, the SDO paper. That said, I’ll lay out some of the recent evidence and you can decide where it should fit in:

  1. Certain things double every so many years. The most famous example of this phenomenon is Moore’s Law, which says that the number of transistors on an integrated circuit doubles every two years. A while back some scientists wanted to see if biological complexity followed the same pattern. It did, doubling every 376 million years, with forms of life at the various epochs fitting neatly onto the graph. The really surprising thing was that if you extrapolate back to zero biological complexity you end up at a point ten billion years ago, well before the Earth (or Mars, for that matter) was even around, leaving panspermia as the only option. Now the authors confess this is more of a “thought exercise” than hard science, but that puts it in a very similar category to Drake’s equation, and there’s an argument to be made that the data for the doubling argument is better.
  2. There’s a significant amount of material travelling between planets and even between star systems. I mentioned this in a previous post, but to remind you: some scientists decided to run the numbers on the impact 65 million years ago that wiped out the dinosaurs, and they discovered that a significant amount of the ejected material would have ended up elsewhere in the Solar System, and even elsewhere in the galaxy. Their simulation showed that around 100 million rocks would have made it to Europa (a promising candidate for life) and that around 1,000 rocks would have made it to a potentially habitable planet in a nearby star system (Gliese 581). Now, none of this is to say that any life would have survived on those rocks; rather, the point that jumps out to me is how much material is being exchanged across those distances.
  3. Finally (and I put this last because it might seem striking only to me), apparently the very first animal (as in the biological kingdom Animalia) had 55% of the DNA that humans have. The researchers ascribe this to an “evolutionary burst of new genes”, but to me that looks an awful lot like support for the first point in this list: the idea that life has been churning along for a lot longer than we think, if the first animal already had 55% of our DNA half a billion years ago.
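The back-extrapolation in point 1 is simple enough to reproduce. The data points below are illustrative stand-ins laid on a clean doubling trend, not the paper’s actual genome-complexity estimates; what matters is the method: fit log2(complexity) against time and solve for where the line crosses zero.

```python
DOUBLING_GYR = 0.376  # doubling time reported in the paper (376 Myr)

# Illustrative (time before present in Gyr, log2 complexity) points,
# constructed here to follow a perfect doubling trend; real data
# would be noisy genome-complexity estimates for each epoch.
points = [(3.5, (10.0 - 3.5) / DOUBLING_GYR),  # prokaryote-like life
          (2.0, (10.0 - 2.0) / DOUBLING_GYR),  # eukaryote-like life
          (0.5, (10.0 - 0.5) / DOUBLING_GYR)]  # early animals

# Ordinary least-squares fit of log2(complexity) vs. time, then
# solve for the time at which complexity extrapolates to zero.
n = len(points)
mx = sum(t for t, _ in points) / n
my = sum(c for _, c in points) / n
slope = (sum((t - mx) * (c - my) for t, c in points)
         / sum((t - mx) ** 2 for t, _ in points))
intercept = my - slope * mx
origin_gyr = -intercept / slope  # time where log2(complexity) = 0

print(f"extrapolated origin: {origin_gyr:.1f} Gyr before present")
# well before Earth's ~4.5 Gyr age, which is the core observation
```

With real data the fit is noisier, of course; the striking part of the original result is that the x-intercept lands billions of years before the Earth existed at all.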

Now, of course, even if panspermia is happening, that doesn’t necessarily make the SDO paper wrong. You could have a situation where the filter is not life getting started in the first place, but rather somewhere between any life and intelligent life. It could be that some kind of basic life is very common, but intelligence never evolves. Though before I move on to the next subject, in my opinion that doesn’t seem likely. You can imagine that, if life itself has a hard time getting started in any form, then out of the handful of planets with life only one develops intelligence. But if panspermia is happening, and you basically have life on every planet in the habitable zone (a number estimated at between 10 and 40 billion), then the idea that out of those billions of instances of life intelligence arose only this one time seems a lot less believable. (And yes, I know about things like the difficulty of the prokaryote-eukaryote transition.)
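A quick back-of-the-envelope shows why “intelligence arose exactly once” gets harder to believe as the number of life-bearing planets grows. Using a Poisson approximation with roughly 20 billion such planets (a midpoint of the range above) and a per-planet chance p of intelligence evolving (all values illustrative):

```python
import math

N = 20_000_000_000  # life-bearing planets under full panspermia

def p_exactly_one(p):
    """Poisson approximation to P(intelligence arises exactly once)."""
    lam = N * p                  # expected number of intelligent species
    return lam * math.exp(-lam)  # Poisson P(k = 1)

for p in (1e-12, 1e-10, 1e-9, 1e-7):
    print(f"p = {p:.0e}: expected = {N * p:.3g}, "
          f"P(exactly one) = {p_exactly_one(p):.3g}")
```

“Exactly once” is only likely when p falls in a narrow window around one in twenty billion; for nearly any other value, intelligence should be either common or entirely absent. That fine-tuning is the implausibility I’m gesturing at.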


The final reason I have for being skeptical of the conclusion of the SDO paper is that, as far as I can tell, they give zero weight to the fact that we do have one example of a planet with intelligent life capable of interstellar communication: Earth. In fact, if I’m reading things correctly, they appear to give a pretty low probability that even we should exist. My sense is that when it comes to Fermi’s Paradox this is the one piece of evidence no one knows exactly how to handle. On the one hand, as I pointed out, the history of science has been inextricably linked to the Copernican principle, the idea that Earth and humanity are not unique; and yet on this one point the SDO paper makes the claim that we are entirely unique, that there is probably not another example of detectable life anywhere in our galaxy of 250 billion stars.

You might think there is no, “On the other hand”, but there is. It’s called the anthropic principle, which says there’s nothing remarkable about our uniqueness, because only our uniqueness allows it to be remarked upon. Or in other words, conscious life will only be found in places where conditions allow it to exist, therefore when we look around and find that things are set up in just the right way for us to exist, it couldn’t be any other way because if they weren’t set up in just the right way no one would be around to do the looking. There’s a lot that could be said about the anthropic principle, and this post is already quite long. But there are three points I’d like to bring up:

  1. It is logically true, but logically true in the sense that a tautology is logically true. It basically amounts to saying I’m here because I’m here, or if things were different, they’d be different. Which is fine as far as it goes, but it discourages further exploration and a deeper understanding of why we’re here, or why things are different, rather than encouraging it.
  2. To be fair, it does get used, and by some pretty big names. Stephen Hawking included it in his book A Brief History of Time, but Hawking and others generally use it as an answer to the question of why all the physical constants seem fine-tuned for life. To which people reply that there could be an infinite number of universes, so we just happen to be in the one fine-tuned for life. Okay, fine, but there’s no evidence that the physical constants we experience don’t apply to the rest of the galaxy. The only way it makes sense for Fermi’s Paradox is to argue that our Solar System, or the Earth, is fine-tuned for intelligent life. Or that we were just insanely, ridiculously lucky.
  3. It’s an argument from lack of imagination. In other words, critics of the paradox assert that we are alone because there has not been any evidence to the contrary. But it is entirely possible that we have just not looked hard enough, that our investigation has not been thorough enough. On questioning they will of course admit this possibility, but it is not their preferred explanation. Their preferred explanation is that we’re alone and the filter is behind us, and they will provide a host of possibilities for what that filter might be, but we really know very little about any of them. 

As you might have gathered, I’m not a very big fan of the anthropic principle. I think it’s a cop out. Perhaps you don’t, perhaps, on top of that, you think the idea of panspermia is ridiculous. Fair enough, my project is not to convince you that the anthropic principle is fallacious, or that panspermia definitely happened. My project is merely to illustrate that it’s premature to say that the Great Filter is behind us, that the Fermi Paradox is “solved” or “kaput”. And all that requires is that any one of the foregoing pieces of evidence I’ve assembled ends up being persuasive. 

Beyond all this there is the question we must continually revisit: in which direction is the error worse? If the Great Filter is actually behind us, but out of an abundance of caution we spend more effort than we otherwise would have on x-risks, that’s almost certainly a good thing, particularly since there are plenty of x-risks which could end our civilization but which are nevertheless not the Great Filter. On the other hand, if the Great Filter is ahead of us, then the worst thing we could do is dismiss the possibility entirely, and dismissing it on the basis of a single study might be the saddest thing of all.

Much like with Fermi’s Paradox, everyone reading this assumes that if they’re intelligent enough to appreciate this post, then there must be other readers out there somewhere who share the same intelligent appreciation. But what if there’s not? What if you’re the only one? Given that this might be the case, wouldn’t it be super important for you, as the only person with that degree of intelligence, to donate?