Month: January 2018

Speech as Censorship

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

Many years ago, back when I still had a normal 9-5 job, they called a meeting for the entire IT Department. They had these meetings about once a quarter, but on this occasion, a significant level of dissatisfaction had built up around wages and salary. It was widely thought that the company was underpaying its technical staff when compared to the other companies in the area. In order to get ahead of the grumbling, the CFO decided to attend the meeting and explain things.

I’m not sure what the CFO expected to accomplish, because he came to the meeting with a really weak hand. They weren’t prepared to announce any big changes to the way they determined salaries, but they did announce that the amount we paid for health insurance every month was going up. In other words, there was significant bad news and no good news.

The one thing the CFO did have to fall back on was that they had been very good about giving annual raises. And as I recall he made a big point of emphasizing that. The problem was that the size of these annual raises was such that they were more cost of living adjustments than actual raises. Which is to say that if they give you a 2% raise, and inflation is at 2%, then you wouldn't expect your purchasing power to change very much, if at all. And, of course, that assumes that your costs don't go up. Which we had just been informed, in that very same meeting, was not the case. What this meant is that, once you combined the increased healthcare costs with a "raise" that was very close to the rate of inflation, you could quite easily end up making less in real terms after all was said and done. This was particularly true for those near the bottom end of the salary scale, where healthcare was a bigger percentage of their gross pay. (I certainly wasn't at the lowest end of the scale, but I was definitely in the bottom 50% at this point.)
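To put some numbers on this (the figures below are purely illustrative, not the company's actual salaries or premiums), here's a quick sketch of how a raise that merely matches inflation, combined with a higher healthcare premium, can leave you worse off in real terms:

```python
# Hypothetical illustration: a raise that only matches inflation,
# plus a bigger healthcare deduction, shrinks real take-home pay.

def real_take_home_change(salary, raise_pct, inflation_pct,
                          old_premium_monthly, new_premium_monthly):
    """Return the inflation-adjusted change in annual take-home pay."""
    new_salary = salary * (1 + raise_pct)
    old_take_home = salary - old_premium_monthly * 12
    new_take_home = new_salary - new_premium_monthly * 12
    # Deflate next year's take-home pay into this year's dollars.
    new_take_home_real = new_take_home / (1 + inflation_pct)
    return new_take_home_real - old_take_home

# A 2% raise with 2% inflation, while the monthly premium
# rises from $150 to $200:
change = real_take_home_change(50_000, 0.02, 0.02, 150, 200)
print(round(change, 2))  # negative: purchasing power went down
```

The exact numbers don't matter; the point is that the raise cancels against inflation, so the premium increase comes straight out of real income, and the smaller the salary, the bigger the bite.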

Now if I had been smart, I would have kept this observation to myself, and not said anything, but keeping my mouth shut has never been my forte, so I brought it up. This did not go over well with the CFO who, as I indicated, was already kind of floundering. After stammering for a bit, the only retort he could come up with was that they had never reduced our salaries when inflation was negative. This would have been an excellent point, if inflation ever went negative, which it doesn't (at least it hadn't during the entire time the company had been in business). In fact, given that this happened somewhere in the 2004-2006 range, when inflation was hovering at or above 3%, this retort was particularly pointless. In any event, I didn't point this out; I was at least that smart.

The reason it wasn't smart on my part is that the whole incident was enough for the CFO to strongly suggest that I shouldn't get a raise at all the next time the subject came up. But I guess my boss told him that I would quit if I didn't get it, and that that would be bad, so it was eventually approved, over the CFO's objection.

I tell this story because we’re going to be talking about free speech, and this incident is a great example of how speech used to work:

  • There was always a well-defined power structure. There were a few people at the top, and a mass of people below them. In the story you have the CFO and the employees. Before the internet you had politicians on top, with newspapers and TV acting as the voice of the masses.
  • People cared about fewer things, and held onto them less passionately, and for many of these concerns there was wide agreement even at the different levels. Neither the executives nor the employees wanted the company to go out of business. From the perspective of the press, things like the evening news had a vast audience of people who all felt loosely connected to everything that was being said.
  • Given that there were fewer concerns, with wider agreement, anything being said was likely to be of interest to more people. No one else was stupid enough to bring up the idea of wages staying ahead of inflation, but lots of my co-workers congratulated me afterwards for bringing it up.
  • There was a limited number of venues where this speech could take place. In my example it largely took place only once a quarter, during the general meeting. In the pre-internet age we had the limited venues of newspapers, TV and radio.
  • Censorship was easy. If they could have shut me up then likely no one else would have brought up that point, and, as I said, if I had been smarter even I wouldn't have brought it up. I confess to not having seen the movie The Post, but as I understand it, the whole point was that if the government could have stopped the Washington Post and the New York Times from printing the Pentagon Papers, then it would have been over. I doubt anyone would try the same thing now.

As I said, this is how speech used to work. But this is not how it currently works. Now, rather than having the executives and employees all meet in a single room once a quarter to listen to what each other has to say, there are countless rooms, and the meetings don't happen once a quarter; they happen all day, every day. And rather than a format where the executive stands up in front of people and takes questions, every person can say just about anything at any time, and everyone has a megaphone, though some megaphones are bigger than others. Oh, and did I also mention that some of the people wear masks?

On top of all this, if you’ll allow me to stretch the metaphor even further (though hopefully not too far), Facebook and Google own most of the buildings and once you’re inside they usher you to the room they think you’ll like the best, because it’s a room full of people who think exactly the same way you do. Now of course you don’t have to go into the Facebook or Google buildings, but that’s where most of the people are, and if you want to either hear what’s happening from, or talk about what’s happening to, a bunch of people that’s the fastest way to do it.

In the good old days I was a free speech absolutist. And it's entirely possible that this was a factor in my pointing out the possibility that wages might not be keeping up with inflation. After all, if we follow the metaphor, being a free speech absolutist meant I felt that the executives should have lots of meetings, and that at those meetings people should be able to ask any question or raise any point they wanted to. Which is what I did.

Though even back then, my support was not so absolute that I thought you should be able to shout "fire" in a crowded theater. Nor did I think that violent threats should be protected. But beyond that I felt that more speech is more better. The question before me now is: do I still think that? The answer is probably, but there are some doubts starting to creep in.

I’ve written about free speech before, and the place that social media currently occupies in the debate, but it was once again brought to my attention by a recent issue of Wired, billed as The Speech Issue. The general position Wired takes in the issue can probably best be summed up by the title of their featured article, The {Divisive, Corrosive, Democracy-Poisoning} Golden Age of Free Speech. Which, even if you know exactly what problems they’re talking about, still grabs your attention, particularly if you’re a free speech absolutist.

This is actually fairly well-covered territory, but despite that I think the article framed some of the issues in a way I hadn't considered before, particularly with respect to the issue of censorship. When considering speech, there are actually two ways to approach it: you can be in favor of speech or you can be against censorship. In both cases what you're really hoping for is that good ideas, good policies, and more broadly, truth, will rise to the surface. If you're focused on the speech side of the equation, as Americans have been ever since the First Amendment was passed, then you work to create a situation where you have as many ideas out there as possible. Which is not to say censorship didn't come up, but the issue was always framed as being pro-speech, not anti-censorship. Also censorship, particularly government censorship, has, historically, been easy to identify and deal with.

Under the old standard for speech of "the more the merrier" we are, as the Wired article points out, in a "Golden Age", but it doesn't seem to be working out quite as well as people hoped. And that's because, when viewed from the other side, the enormous quantity of free speech has ended up creating another form of censorship. From the article:

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Thus, we very well might have ended up in a situation where the speech side of things looks great, but the censorship side of things is as bad as it’s ever been, and possibly precisely because of the quantity of speech.

I think the point about not setting “off any First Amendment alarm bells” was particularly interesting, because it describes the current situation as being one where none of the old rules work. Which is probably more accurate than I’m comfortable with.

(As an interesting aside, I've been listening to the Revolutions Podcast (which I highly recommend), and currently he's covering the various revolutions that took place in Europe starting in 1848. While listening, I was struck by how freedom of speech and of the press was a major demand in all of those revolutions. In some cases people were willing to keep the king or emperor around, but they wouldn't budge on freedom of the press.)

All of this leaves us asking: what happens when you take something which has been core to the modern, enlightened, western world and say it no longer works? That, in fact, things might have flipped, and it might be working in the exact opposite fashion from how it always has?

Identifying this switch is useful, but I'm not sure how far it gets us. In addition to switching our focus from speech to censorship, we're also forced to redefine what we mean by censorship. If censorship now consists of an "epidemic of disinformation", then one idea is to boost the signal of good information. Both Google and Facebook latched on to this idea and started to fact-check things, which proved to be less effective than expected. Google came under so much fire for bias that they cancelled their efforts just a few days ago. And Facebook's efforts have not met with much success either.

Even if things had worked, you still have a situation where some third party is choosing what people see and what they don’t see, which is basically still censorship. It may take a slightly different form, and come about from slightly different motives, but the end result is largely the same. Do you really want Facebook and Google deciding what’s true and what’s not? Deciding that something you just posted is garbage, and that accordingly no one should see it? Deciding when “viral outrage” is justified and when it’s not? (The correct answer on this one might be that it’s never justified.)

Speaking of "viral outrage", at this point everyone seems to agree that if it's ever justified, it's justified against members of white supremacist groups and, more broadly, members of the alt-right. But how can we be sure that this new variant of censorship is justified here, but not elsewhere? I know it seems pretty clear cut, but even so, as always the question that's raised is: how do we know that this form of censorship will only be used against the "bad" guys? With that consequence in mind it might be appropriate to look at the trade-off. If we're running a big risk on the one hand by allowing this sort of censorship, how big is the risk on the other hand, the risk we're hoping to eliminate? In this respect the recent issue of Wired was useful as well, because it contained an entire infographic titled "The Reach of Hate Online". Here's what it had to say:

Headlines (and timelines) can make it seem like a rising tide of white supremacists, Nazis and trolls is one “free speech rally” away from swamping us all. To be sure, hate in America is more visible than it’s been in decades, helped along by the internet–the factory where bots, propaganda, and real people slosh around together in an agita-inducing flurry. But all that makes it easy to overestimate how many far-right extremists there are. We pulled together some numbers and facts to put the hate-mongering into context.

Among the facts which jump out:

-There are 27 million bots on Twitter; of those, 2,752 have been identified as Russian bots. (If you do the math, that's 0.01%.)

-Looking at the subreddits most associated with the far right, the number of subscribers to the biggest (r/The_Donald) is 538,762, while r/aww, a subreddit dedicated to cute animals, sits at 16,360,969.

-Finally, if we look at the two biggest free speech rallies, Charlottesville and a rally shortly after that in Boston, the number of demonstrators was always completely overwhelmed by the number of counter-demonstrators. The Charlottesville rally was answered by 130 counter-rallies held all over the nation the very next day. And the Boston free speech rally had 25 "far right demonstrators in attendance" as compared to 40,000 counter-protestors.

Above, in a quote from the Wired article, the author talked about the "unbearable and disproportionate cost on the act of speaking out". I think if there are going to be 1,600 counter-protestors for every far-right individual in Boston, that might represent the "unbearable and disproportionate cost" the author was talking about.

Now let's grant that these 25 individuals were the worst of the worst, people totally without any redeeming qualities whatsoever (except bravery, apparently), and that nothing they wanted to say was worth listening to. First, we still have the possibility that we'll eventually have 40,000 people show up and shout down 25 people who do have something worth saying. And second, if these 25 people are the most extreme individuals in the movement, what is going on with anyone more moderate than them?

I mentioned this phenomenon in my last post. If the extremists are the only ones showing up, then to the extent that they are allowed to speak, they dominate the discussion. Additionally, you may end up with a lot of people who, because of the "unbearable and disproportionate cost" of speaking out, keep their feelings to themselves, revealing them only when they're standing in the voting booth, and then, to the surprise of everyone, boom, you've got Trump. Given all this, are we sure that viral outrage is really the best strategy, even if it's being used against literal Nazis? Finally, to return to the point I started with, when you consider the numbers, is the far-right risk really as great as people think? Is it so large that it justifies unleashing this potentially terrifying weapon?

What is the solution then? Returning to the article, it’s depressingly short on suggestions for how to proceed. And the suggestions they do offer mostly involve the government intervening in some ill-defined manner:

But we don't have to be resigned to the status quo. Facebook is only 13 years old, Twitter 11, and even Google is but 19. At this moment in the evolution of the auto industry, there were still no seat belts, airbags, emission controls, or mandatory crumple zones. The rules and incentive structures underlying how attention and surveillance work on the internet need to change. But in fairness to Facebook and Google and Twitter, while there's a lot they could do better, the public outcry demanding that they fix all these problems is fundamentally mistaken. There are few solutions to the problems of digital discourse that don't involve huge trade-offs—and those are not choices for Mark Zuckerberg alone to make. These are deeply political decisions. In the 20th century, the US passed laws that outlawed lead in paint and gasoline, that defined how much privacy a landlord needs to give his tenants, and that determined how much a phone company can surveil its customers. We can decide how we want to handle digital surveillance, attention-channeling, harassment, data collection, and algorithmic decision-making. We just need to start the discussion. Now.

As you can see there isn't anything you would point to as a concrete policy proposal. Mostly they offer up a lot of analogies to past regulations and then press for a discussion to begin. Many of the analogies deal with previous ways in which dangers were mitigated. But let's take a moment and run with that. He mentions seatbelts and lead paint. In both of these cases you could perform studies, and gather statistics, which showed people being harmed. I don't know that you can perform the same kinds of studies or gather similar statistics when it comes to speech. Also, there are very few benefits to not wearing seatbelts, and you can make paint without lead. But as I have pointed out, the same is not true for speech; there are many benefits to free speech, not the least of which may be that it's not possible to have a non-free-speech democracy (whatever Russia and China might say).

Their other analogies revolve around privacy, specifically landlords and phone companies. And while initially this seems like a more promising avenue, the author completely overlooks the trade-offs. Privacy entails keeping certain things private, as in, unknown. I hope it's not too much of a stretch to see how any increase in privacy entails a corresponding increase in anonymity. The problem being that when Wired talks about the current divisiveness and corrosion accompanying modern free speech, most people agree that anonymity largely contributes to both.

As part of the issue there is an entire other article about Megan Squire, a woman who spends all of her free time attempting to uncover white supremacists and fascists. Frequently she then passes the identifying information over to the Antifa. Wired seems to be largely in favor of this endeavor, and one might assume from context that they're actually offering it up as a potential solution. But here we once again encounter the conflicted landscape of free speech. If, as the issue claims elsewhere, censorship currently takes the form of "coordinated harassment campaigns", "epidemics of disinformation" and tactics meant to "swamp…attention", how does removing the anonymity of a single individual do much to stop this censorship, rather than being an example of it? And how does this dovetail with the privacy regulations they allude to in the other article?

I think, as with so many things, the correct strategy lies somewhere in the middle. I certainly agree that widespread anonymity has been a big contributor to making the internet the cesspool it currently is. But completely eliminating all privacy, or stripping away anonymity from individuals and then passing them over to the Antifa, doesn't feel like a great strategy either.

The upshot of all of this is that there aren't any great strategies. If I were forced to offer up an idea, the only one I have is that most services should have some kind of circuit breaker, something which turns things off if any one person starts getting too much attention. (This at least would have made Justine Sacco's life better.) But don't bother telling me about all the difficulties and issues inherent in that; I can already imagine them.
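In case the circuit breaker idea seems hopelessly vague, here's a minimal sketch of what I have in mind (the names and thresholds are entirely hypothetical, and this is my own illustration, not any actual platform's mechanism): a sliding-window counter that stops amplifying posts about a person once the mentions come too fast.

```python
# Hypothetical sketch of an attention "circuit breaker": once any one
# person is mentioned more than max_mentions times inside the window,
# the service stops amplifying posts about them until things cool down.
import time
from collections import deque

class AttentionCircuitBreaker:
    def __init__(self, max_mentions, window_seconds):
        self.max_mentions = max_mentions
        self.window = window_seconds
        self.mentions = {}  # user -> deque of mention timestamps

    def record_mention(self, user, now=None):
        now = time.time() if now is None else now
        q = self.mentions.setdefault(user, deque())
        q.append(now)
        # Drop mentions that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()

    def tripped(self, user, now=None):
        """True if posts about `user` should stop being amplified."""
        now = time.time() if now is None else now
        q = self.mentions.get(user, deque())
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_mentions

# Five mentions in five seconds, against a limit of three per hour:
breaker = AttentionCircuitBreaker(max_mentions=3, window_seconds=3600)
for t in range(5):
    breaker.record_mention("someone", now=1000 + t)
print(breaker.tripped("someone", now=1005))  # True: over the limit
```

The point isn't the specific mechanism, just that a dumb rate limit on attention, rather than on speech, is at least conceivable.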

In answer to the question of whether I am still a free speech absolutist: I still mostly am, but I think I'm going to pay more attention to the censorship side of things. Perhaps now it would be more accurate to call me an anti-censorship absolutist.

In closing, all that can truly be said is that we're in new territory here. Greater connectivity has led to less effective coordination. Privacy is important, but it might also cause people to behave worse. Allowing as much speech as possible is no longer a defense against censorship; it's a vector for censorship. And above all, the genie is out of the bottle, making one thing clear at least: we can't go back to the way it was.

I know I said there were no great strategies, but if you want to encourage me to keep thinking about it, in the hopes I can come up with one, consider donating.

Dispatches from my Death Bed


I know that my readers like to imagine that I live in a remote cabin, deep in a snowy forest, surrounded by books. And that every week, I lovingly craft an erudite missive which gets sent out into the world, unsullied by mundane concerns like family emergencies or poor health or a day job. I guess it's more accurate to say that this is what I imagine my readers imagine it's like. But if there are any of you who do imagine things this way, you should disabuse yourself of that notion. As it turns out, I am subject to the same frailties as everyone else, and this week that frailty is (as far as I can tell) the flu, which is apparently particularly bad this season. In fact, a friend of mine, a fine strapping lad in his late 30s, actually ended up being hospitalized for several days, his flu was so bad. (I don't think that's how I got it. I read about it on Facebook. I haven't actually seen him in months.)

What does this mean? It means that the clockwork precision of releasing a long form essay every week might occasionally be derailed by, as they say, "events on the ground". In this case I'm still going to put out a post for the week; it's just going to a) be shorter than usual and b) cover many topics shallowly rather than covering one in depth. Mostly because this requires less energy.

The New LDS First Presidency

First, given that this is ostensibly an LDS blog, I should probably spend some time talking about the new First Presidency. My prediction (which I didn't register beforehand, so I can't really get credit for it, which is okay because it was mostly wrong anyway) was that the new president, President Nelson, wouldn't keep the two counselors of his predecessor, President Monson. And had I left it there I would have been correct, but I went on to predict that he would keep Uchtdorf as a counselor and replace Eyring, since it looked like Eyring was slowing down. But in fact he did the opposite: he kept Eyring (though moving him from First to Second Counselor) and brought in Dallin H. Oaks. Historically, new LDS Presidents have kept the counselors of their predecessors, so predicting that he wouldn't should get me some credit, but, of course, as is so often the case, I got greedy, tried for a really impressive prediction, and ended up flat on my face.

Thus far, if you're LDS you probably already know all of this (well, not the part about my overly prideful predictions) and if you're not LDS you're probably not that interested. Let me try and make it interesting for both groups. Of course the number one subject everyone wants to bring up when talking about the leadership of the Mormon Church is the issue of LGBT members. And when people outside the church bring it up they do it almost exclusively as a criticism. In fact many members felt that the New York Times obituary of the late President Monson was less an obituary and more another criticism of the LDS Church's LGBT policies. A petition asking the Times to rewrite the obituary got 190,000 signatures, and enough support that the Times did issue a statement defending the obituary.

Of course the criticism didn't end with President Monson's passing, and the very first question asked of the new leadership at the press conference following the announcement was how they would handle LGBT issues. It's a safe bet that most people outside the Church found the answer inadequate. Additionally, those engaged in trying to "read the tea leaves", so to speak, view the replacement of Elder Uchtdorf by Elder Oaks as a bad sign:

…Some observers see the leadership shuffle as a definite turn toward retrenchment.

“President Nelson is seen as a conservative, and Elder Uchtdorf as a centrist and even potentially a reformer, so many people will see this as sending a message about the direction of the church under President Nelson’s leadership,” Mason said. It may “indicate, and almost certainly will actually represent, a rightward turn in the First Presidency.”

Steve Taysom, a Mormon who teaches philosophy and comparative religion at Cleveland State University, echoed that sentiment.

Uchtdorf has been “an important symbolic presence for liberal-minded Mormons, and he clearly did not share President Nelson’s and Elder Oaks’ views on social activism in general and LGBTQ issues in particular,” Taysom wrote in an email. “This is going to be an administration characterized by social and ecclesiastical retrenchment, and the touchstone for this retrenchment appears to be the 1995 family proclamation, which, of course, Oaks spoke about in the last General Conference.”

For the moment I’m going to set aside a discussion of how someone can both believe that the LDS Church is being led by God and believe it’s completely wrong in how it treats the LGBT community, and consider, instead, where things might be headed. Broadly there are three possibilities:

1- The societal pendulum could swing back. There could be a backlash against LGBT rights, specifically marriage, and the pressure on the Church could go away. I think this is extremely unlikely.

2- An equilibrium could be reached. The LDS Church could get enough credit for its emphasis on loving and accepting LGBT members (a point the new leadership kept coming back to at the press conference) that the fact that they still believe marriage is only between a man and a woman could be accepted, if not necessarily embraced by those outside the church. This is my preferred solution.

3- The pressure could continue to ratchet up, moving from individuals and organizations to the government, until sooner or later they're re-enacting the measures used against the church in the 1880s, when the government was attempting to stamp out polygamy. This option seems increasingly likely.

Steven Pinker and the Moment We Live In

Recently Steven Pinker (who I've mentioned frequently in this space) was part of a panel where he discussed the idea that there are certain facts which are unmentionable on college campuses, in mainstream media, or in "polite" society in general. He then went on to point out that, as a consequence of this, these facts are only encountered outside of an academic setting, in more fringe media outlets and among people who don't care about politeness. This results in basically only one side of the debate using these facts in their arguments, meaning they get to extrapolate a worldview from those facts which ends up going largely unchallenged, because the other "side" won't even acknowledge the facts in the first place.

As Pinker points out, this has several additional consequences. To begin with, it makes it look like academia and the mainstream media are hiding something, that the reason they won't acknowledge the facts is that they have no answer for them. As you can imagine, this immediately gives enormous power to the side which isn't afraid to acknowledge the facts, because they are immediately perceived as being on more solid evidentiary footing. They're dealing with reality as it is, not as they wish it to be. Additionally it undercuts the perceived reliability of academia and the mainstream media, and makes it more difficult for them to accuse other people of fake news when it's clear they are engaged in the avoidance of facts themselves. Which is certainly a form of fake news. And finally, and I think in Pinker's view most worrying, it leaves people with no defenses against those arguments, because no one is making a compelling argument for the other side, since to do so they would have to acknowledge the facts in the first place. Pinker then goes on to contend that it is better to acknowledge those facts, and that there are perfectly reasonable and academic-friendly ways to do so, and great counter-arguments to be made.

Despite this final point, his speech, as you might imagine, was not taken in the manner Pinker hoped (which is not to say that it wasn't taken in the manner he expected). Numerous people on social media jumped down his throat, leading the New York Times to start an article about the controversy by saying that "Steven Pinker is a liberal, Jewish professor. But social media convinced people that he's a darling of the alt-right." Of particular note was PZ Myers, who used to be pretty tight with the rest of the new atheists, but now apparently can't stand them, and had this to say about Pinker: "Welp, that settles it, Steven Pinker is a lying right-wing shitweasel."

You are of course welcome to watch the clip and decide for yourself whether Pinker is guilty as charged, but I personally think he's hit on something profound, and it dovetails quite well with my own explanation for the victory of Trump. For many years, the majority of Americans have been worried about illegal immigration. (Fun fact: the high was 72% in 2006.) And yet for a variety of reasons both Republicans and Democrats ignored the issue. This went on year after year, leading, I can only assume, to mounting frustration on the part of the voters. Meaning that when someone came along who finally was willing to talk about it, it almost didn't matter who he was; people were so hungry for someone to at least acknowledge their concerns that basically nothing else about the guy mattered. And yes, I'm talking about Trump.

I understand that immigration does not occupy the same place as the facts Pinker mentioned, but you see some of the same things happening: avoidance of a subject translating into a perception of illegitimacy (it's no coincidence that trust in the media is at its lowest level ever); the argument being mostly dominated by one side; and a lack of any sort of immunity among people who hear the argument from only that one side (particularly since, in my opinion, that side has a lot of merit).

Once again I wonder what will happen. And this time I see four possibilities.

1- The populist moment passes. Trump is removed from office or fails to get re-elected. Things return to a pre-2016 state. This is another possibility that strikes me as incredibly unlikely.

2- Populist movements continue to gain steam, perhaps on both sides, but with more warning the globalist wings of the parties manage to continually defeat them in the primaries, and much like the Tea Party, things simmer, but never really reach a boil.

3- One of the parties (presumably the Republicans, but who can tell) adopts populism as its primary platform, and ends up with someone much more polished than Trump as its presidential candidate. (Leading to a sort of winner-take-all election?)

4- We get a new third party which overtakes one of the current parties, driving them into irrelevance.  I also think that this is unlikely, but it’s definitely the most interesting option.

The Latest From the Pervnado

Of course the Pervnado continues to spin. Most recently there was quite a bit of noise being made over some allegations against Aziz Ansari. Also, people are finally starting to turn against Woody Allen. And finally, Catherine Deneuve, along with 100 other French women, put out a letter saying the #MeToo movement has turned into a “witch hunt”. I think the tornado metaphor becomes more and more apt, since as you can tell the winds appear to be blowing in every direction, and once again this is another place where it’s hard to know where things are headed.

Taking the incidents in order. The Ansari incident operated on a lot of different layers. Some women came to his defense. Others were quite angry with Ansari, particularly since he was wearing a Time’s Up pin at the Golden Globes and has otherwise been very outspoken in his opposition to harassment. In this instance, it does not appear that he is in any danger of losing his current gigs, though it’s still early. And the lack of a general consensus on how seriously his offense should be treated seems to have created something of an impasse. Which is perhaps where it will remain. I will say, in my opinion, it’s good that there’s a debate going on. As I think I indicated, we’re more likely to arrive at something which doesn’t neglect either justice or mercy if both of those principles actually have a seat at the table.

I mentioned Woody Allen in a previous post, in particular I expressed surprise that the Pervnado had not swept him up as well. Of course now that it’s beginning to, there are some interesting aspects to this story as well. I had always kind of assumed he was a pervert. Not only was I aware of the Dylan Farrow accusations from when they surfaced in 2013, but there was of course the whole Soon-Yi scandal (though, interestingly, they’re still together). And on top of all of that, I was watching his best known movie, Annie Hall, and there’s a scene where Woody Allen calls one of his friends to bail him out, and the friend expresses how much of a sacrifice it was because, at the time, he was fooling around with two 16-year-old twins. As I recall, he then repeats their age for emphasis.

All of this is to say that it seemed pretty obvious Woody Allen was a pervert. Obvious enough that if people were still working with him it could only be because they had heard the accusations and decided to overlook them, or decided that there was some room for doubt. And indeed there is. When you look beyond the unsettling things I mentioned, there’s apparently only one accusation of something illegal. (Please correct me if I’m wrong about this.) Which came during a messy divorce, was investigated by the police and dismissed, and finally, not even all of Dylan Farrow’s siblings believe it happened. I maintain he’s a pervert, and I’m not a big fan of his movies either, but you start to see where figuring out what’s actually going on becomes really difficult and being completely above suspicion (see the Pence Rule again) might be the only way to go.

Finally there’s the Deneuve letter. It’s in French and all the women who signed it are French, and I wonder if this speaks to a cultural divide. How much of the #MeToo movement is universal and how much of it is an American movement? Particularly those parts which its opponents point to as excessive?

As far as the content, the letter mentioned many of the things which have been brought up in other venues:

The letter clearly condemns rape, but also draws a line. “Rape is a crime, but trying to seduce someone, even persistently or cack-handedly, is not – nor is men being gentlemanly a macho attack,” read the letter, translated from French. “Men have been punished summarily, forced out of their jobs when all they did was touch someone’s knee or try to steal a kiss.”

(Of course my question is how do you say cack-handedly in French?)

Beyond that it mentions that questioning of stories or tactics is being labeled as treachery, the totalitarian climate and those who have been accused not being given a chance to respond or defend themselves. (I’m using this translation.)

Once again we arrive back to the question I keep asking. Where are things headed? Is the Deneuve letter an indication that even women are getting sick of the #MeToo movement? (Though Deneuve did offer a partial apology.) Is it just an indication, as I mentioned earlier, of a specifically American phenomenon? Or is it an indication that opinion won’t ever reach a new equilibrium, but that rather things are going to split into two hostile camps, a split which will only deepen and intensify?

I think this is what all of these subjects have in common: reaching a new equilibrium or any form of consensus appears increasingly unlikely the longer things continue. And without that, where does it end? It certainly feels like there has to be a breaking point sooner or later, but what does that breaking point look like? Oliver Wendell Holmes, the great Supreme Court justice once said:

Between two groups that want to make inconsistent kinds of world I see no remedy but force.

My Dad phrased it in a different fashion, he would say that all important issues are eventually decided by the shedding of blood. Is that where things are eventually going to end up? I sure hope not. But from where I stand common ground is vanishing, tensions are increasing, and very few people seem to be interested in sitting down for peace talks.

I wonder if sympathy for me being sick will get anyone to donate. Probably not, but it can’t hurt to toss it out there.

Commentary on Moloch (Now With More Fermi’s Paradox)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

As I mentioned a few weeks ago, in addition to recording my own blog posts and turning them into a podcast, I started doing the same thing with posts from SlateStarCodex, Scott Alexander’s fantastic blog. I would again repeat that if you like SSC and prefer to listen to your content rather than reading it, you should check it out. When I announced the SSC podcast several people requested that I record some of the older SSC posts, the classics if you will, so this week I decided to record his Meditations on Moloch post. And I figured as long as I was doing that, I might as well provide some commentary on it. Because while I think Alexander is largely right on the money, I don’t think he goes far enough, or maybe it’s fairer to say, in my opinion he misses some of the implications.

Of course Alexander’s post is nearly 15,000 words, and my posts are generally around 3,500 words, so I don’t know how much of his epic post I’ll be able to cover, but I think there are several ways in which his post ties into themes and subjects I’m interested in, specifically technology, religion, and Fermi’s Paradox, so I’ll be highlighting those connections. But before reading my take on things, I would urge you to read the original post or listen to my recording of it. The name of the post comes from Allen Ginsberg’s poem Howl, particularly part II; consequently Alexander makes extensive reference to the poem, and I’ve used a recording of one of Ginsberg’s recitations of Howl every time Alexander quotes from it. (Which I personally think is super cool.)

For those of you who can’t be bothered, or don’t have the time to read or listen to the original (and as I said it is long) I’ll give a very brief summation of (what I believe to be) Alexander’s point:

The idea of Moloch is the idea of the race to the bottom, and Alexander gives over a dozen examples of it in action, but the one I want to borrow is his analogy of the rats and the island.

Suppose you are one of the first rats introduced onto a pristine island. It is full of yummy plants and you live an idyllic life lounging about, eating, and composing great works of art (you’re one of those rats from The Rats of NIMH).

You live a long life, mate, and have a dozen children. All of them have a dozen children, and so on. In a couple generations, the island has ten thousand rats and has reached its carrying capacity. Now there’s not enough food and space to go around, and a certain percent of each new generation dies in order to keep the population steady at ten thousand.

A certain sect of rats abandons art in order to devote more of their time to scrounging for survival. Each generation, a bit less of this sect dies than members of the mainstream, until after a while, no rat composes any art at all, and any sect of rats who try to bring it back will go extinct within a few generations.

He offers the rat example in the Malthusian Trap section, and in this case it’s a race to the bottom for survival, but you can see a similar race to the bottom with capitalism and profits, democracy and electability, and essentially any system with multiple actors and limited resources. Alexander groups all of these drives to optimize one factor at the expense of others under the heading of Moloch. And makes the not unjustified claim that unless we can figure out some way to defeat Moloch, it will eventually mean the end of civilization. And by this he means the end of art, and literature, and science and love. Because just like the rats, in the end those aren’t any of the factors we’re optimizing for.
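The rat story is, at bottom, simple frequency-dependent selection, and a toy simulation makes the dynamic concrete. (The numbers here, including the 5% fitness penalty for composing art, are my own illustrative assumptions, not Alexander’s.)

```python
# Toy model of Alexander's island: two kinds of rats compete at a fixed
# carrying capacity. "Artists" reproduce slightly less because they spend
# time on art; selection at the Malthusian limit does the rest.
def simulate(generations=40, capacity=10_000, artist_share=0.5,
             artist_fitness=0.95, survivor_fitness=1.0):
    artists = capacity * artist_share
    survivors = capacity - artists
    for _ in range(generations):
        # Each type grows (or shrinks) in proportion to its fitness...
        a = artists * artist_fitness
        s = survivors * survivor_fitness
        # ...then scarcity cuts the population back to carrying capacity.
        total = a + s
        artists = capacity * a / total
        survivors = capacity * s / total
    return artists / capacity

# Fraction of artists remaining after 40 generations (starts at 0.5).
print(simulate())
```

Even a small, steady disadvantage compounds: the artists dwindle toward extinction in a few dozen generations, with no one ever deciding that art should die.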

Presumably very few of my readers will outright disagree with the idea of Moloch, that evolution, and capitalism, and politics are predisposed to engage in a race to the bottom. They will only disagree with how much of a factor it is in this enlightened age, with many granting the existence of Moloch, but convinced that he has already been well and truly beaten. Others, in my opinion, the more realistic, believe that the contest is as yet undecided. And finally there are certainly people who believe that we are destined to lose if we haven’t lost already.

The difference between these groups hinges almost entirely on their views of technology. With the first group feeling that technology has allowed us to progress past things like Malthusian Limits. We may be rats on an island, but technology allows us to turn more of the island into arable land, to build houses on the slopes of the extinct volcano, and best of all, if necessary, to build ships and find new islands.

People in the middle group agree with the benefits of technology, and agree that when the rats first arrived on the island that things were pretty awesome, but they understand that other islands are really hard to get to, and that the current island is not looking so great. Also that while some rats have it pretty good, that there are a lot of rats who already have a sad, miserable life.

In the final group we have people who are convinced that we’ve already wrecked the island we’re on, and that the other islands are horrible inhospitable wastelands. They probably also believe that we’re already on the verge of collapse and we just don’t know it. And, that sure, collapse won’t kill all the rats, but it will leave those that survive envying the dead.

I’m in the middle group, maybe leaning towards the final group, mostly because the majority of rats seem completely unaware of Moloch, and the race to the bottom. But also because I think technology could fail to save us either by not being powerful enough, or by being too powerful. And this is where things like the singularity and in particular artificial intelligence come into play.

On these points Alexander and I are largely in agreement, and at this stage I mostly want to point out how the idea of Moloch ties into the theme of this blog. I claim the summer is past and the harvest is over, but this presupposes that there was a summer and a harvest, and also explains why it must eventually end, not because I’m a pessimist, but because there is an implacable force driving things in that direction: Moloch. Alexander acknowledges the summer and the harvest as well and ties it into Robin Hanson’s idea of Dream Time, the idea that humanity has never believed in more crazy, non-adaptive things. In other words, they’ve never been less worried about Moloch. Why should this be?

Alexander offers four things that are currently keeping Moloch at bay, and allowing people to engage in a host of activities which don’t have much to do with daily survival.

Excess resources: When the rats first arrive on the island, they don’t need to worry about survival because they have a whole island to themselves. To tie this in to a point I’ve made in the past, it was not that long ago that we discovered the “island” of millions and millions of years’ worth of stored solar energy, in the form of fossil fuels. And we have kept Moloch at bay through the excess resources of coal, oil and gas, i.e. cheap and abundant energy. But as I pointed out in a previous post, even if you don’t believe in peak oil, or even if you’re unconcerned by global warming, at some point continued growth runs into hard limits built into the laws of physics.
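To put a rough number on those hard limits (the figures below are my own back-of-the-envelope assumptions, not anything from Alexander’s post): at anything like the historical ~2.3% annual growth in energy use, consumption would rival the total sunlight falling on Earth within a few centuries, no matter how clever our technology gets.

```python
import math

# Rough, assumed figures: current world power use on the order of 18 TW;
# total solar power intercepted by Earth roughly 174,000 TW; historical
# growth in energy consumption around 2.3% per year.
current_tw = 18.0
solar_tw = 1.74e5
growth = 0.023

# Years until exponentially growing consumption equals all incident sunlight:
# solve current * (1 + growth)^t = solar for t.
years = math.log(solar_tw / current_tw) / math.log(1 + growth)
print(round(years))  # on the order of four centuries
```

Four hundred years is a long time, but it’s not geological time, and waste heat alone would become intolerable well before that point.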

Physical Limitations: Certain human values, like sleep, have to be preserved because optimizing productivity requires optimizing sleep, or at least not ignoring it. But what if we can use technology to get around physical limitations like the need for sleep? There’s been a lot of worry recently about whether robots or AI will be taking jobs, another thing I have written about in the past. As an example, just today I saw an article in Slate: Jack in the Box CEO Says It “Makes Sense” to Consider Replacing Human Cashiers with Robots If Wages Rise. Within the article, there’s the vague hint of disapproval (like the scare quotes around “makes sense”) but if Jack in the Box doesn’t do it, someone else will, and I predict that the time is not far distant when it won’t take increased wages to make robotic cashiers competitive; they’ll be dramatically cheaper almost regardless of what you pay the human cashiers.

Utility Maximization: This is just Alexander’s way of saying that human values and some of the systems we rely on are currently well aligned. That one of the reasons capitalism and democracy have done a pretty good job, despite being built such that they will eventually descend to the lowest common denominator, is because, for the moment, both are mostly aligned with genuine human values. Customer and employee satisfaction in the case of capitalism and voter satisfaction in the case of democracy. But Alexander points out that this is a temporary alliance and is more likely to be broken by technology than strengthened. (See robots above.)

On this topic Alexander made one comment that really jumped out at me and which I want to pay particular attention to. I don’t know how much this is related to technology, but it is something I’ve been more and more worried about. As I said democracies are almost custom built to engage in a race to the bottom. Partially this is prevented by the need to align themselves with voter satisfaction, and partially this problem is solved, at least in the US, by as Alexander points out:

having multiple levels of government, unbreakable constitutional laws, checks and balances between different branches, and a couple of other hacks.

I entirely agree with this statement, but I also think that the checks put in by the founders to prevent a race to the bottom have been significantly weakened over the last several decades. I talked about DACA/Dreamers a few months ago, and I just saw that a federal judge ordered the Trump administration to partially revive it. We can argue about the morality and ultimate wisdom of DACA all day long, but it’s hard to see where the judiciary ordering the executive to reimplement legislation the legislature failed to pass is anything other than a gross erosion of the checks Alexander mentions.

Returning to the list, the final thing Alexander lists as something which keeps Moloch at bay is coordination. As an example, the rats could all agree to limit the number of children they have. Corporations could all agree to only make people work 40 hours a week (or more likely be forced to do so by the government). And, in theory, under the rule of law, we all agree to abide by certain rules. Alexander repeatedly talks about how things are solvable from a god’s-eye view, but not by individual actors. Coordination allows people access to this god’s-eye view.

In Alexander’s view coordination is the only thing which is a pure weapon in the fight against Moloch. But, as we’re still on the subject of technology, does it increase coordination or does it undermine it? At first glance most people, including Alexander feel that it will increase the ability to coordinate, and certainly there are lots of reasons for thinking this. Better communication being the primary one. But remember the original post was written in 2014. Now that we’re in 2018 and you can look back over the last few years, and the increasingly fractured landscape, are you sure that technology is going to, on net, improve coordination?

Recall that during World War II the Soviet Union was coordinated enough to sacrifice 13.7% of its population in a coordinated attempt to defeat Nazi Germany (who were able to sacrifice ~8.5% of their people in a coordinated attempt to do the opposite.) Does anyone feel like we could achieve that level of coordination in modern America? You may argue that that sort of coordination was the bad sort, maybe, the outcome was certainly bad, but does anyone doubt both countries were more coordinated than we are by any measurement? Or you may argue that given there is no Nazi Germany, it’s not necessary for us to possess the coordination of a Soviet Union. This is a good point if the only time we need to coordinate is when we’re in conflict with other nations, but what if we need to coordinate to avoid overfishing, or stem global warming, or to put a colony on Mars?

It occurs to me that, like many things, coordination may have a sweet spot: too little and a race to the bottom is unavoidable; too much and you create a fractured society of hundreds of ideological groups composed of perfectly coordinated individuals who refuse any level of coordination with other groups.

Without coordination, the problem is that you have millions of individual actors, all maximizing their own welfare at the expense of the commons. With perfect coordination you have thousands of ideological clusters which end up being more powerful than the individuals they’ve replaced, but no less selfish. Or to put it another way, we’ve moved from the individual to the alt-right and the antifa, but in between those two points we were all just Americans. Which was better. There was, perhaps, less true coordination, but we were more effective (witness our contributions to World War II).

Moving on from technology, the next topic on my list is religion, which is, of course, another very effective method for coordination, but which has lately fallen into disfavor. Alexander doesn’t spend much time specifically discussing religion except to lump it in with the other things that assist in coordination, like traditions, social codes, corporations, governments, etc. But in another post he makes a strong case for religion being the very best of all coordinating institutions. Which ties into a point I frequently bring up: Religion is more important and useful and antifragile than the non-religious (and even some of the religious) realize even in the absence of God. Coordination is just one more example of that.

Alexander brings up another example when he talks about the controversial topic of historical patriarchy and the tradition that women should stay home and bear children. Like most people these days he’s against it, but he brings up the point that it does make a society more resistant to Moloch. (He actually uses the phrase gnon-resistant, but we don’t have the space to get into what gnon is.) And whatever your opinion of it, this is something most religions emphasize, and I offer it up as one more piece of evidence that these beliefs came about because they were useful, not because everyone in the past was a horrible sexist bigot. Whether they continue to be useful is a different topic.

So these are a few of the benefits of religion even in the absence of God, but what if we bring God back into the picture? What does a discussion of Moloch say about whether God exists? In the original post, Alexander repeatedly talks about terrible problems, which disappear in the presence of a “god’s-eye-view”. It’s a phrase he uses repeatedly (17 times, actually.) Thus the obvious solution for Alexander is to build a god that can “kill Moloch dead.” In Alexander’s estimation our best chance at creating this god, is to build a friendly, superintelligent AI. This gives us a god which will not only allow us to engage in perfect coordination, but endow us with infinite resources, remove all physical limitations, and create systems where incentives are perfectly aligned. That is certainly one plan.

Another plan is to hope such a God already exists. (Or to exercise something very similar to hope, faith.) In other words, Alexander’s plan is to hope that we can create a friendly god. Those who are religious have faith that such a God already exists. Is one strategy really that superior to the other? And is there any reason why you wouldn’t hope for both?

If someone is going to place their faith in Alexander’s plan it’s reasonable to ask how likely it is. Well, let’s take a moment to discuss it. Alexander’s god doesn’t get us away from maximization; just as the rats have to maximize for survival, and capitalists have to maximize for profit, the superintelligence will end up having to maximize for something as well. If we want to know how much hope we should have, we need to know what it’s likely to maximize. The cautionary example that most people are familiar with is the paperclip maximizer, which stands in for all sorts of potential maximization. In this specific example the AI just happens to be programmed to produce paper clips, and ends up turning all available matter into paper clips. Of course, the example doesn’t have to be as silly as paper clips. Even if we tell the AI to optimize for intelligence that could still entail turning all available matter into computers, which is less ridiculous than paper clips, but no less deadly for us. Or we could tell it to optimize our happiness, which may just result in the AI plugging an electrode into our pleasure center. A process AI researchers call wire-heading. Voilà! Maximum happiness, and who cares if you’re indistinguishable from a heroin addict. This is just a small taste of the difficulties we face in implementing Alexander’s plan, difficulties which have had whole books written about them. (For example Bostrom’s Superintelligence, which Alexander references frequently in the original post.)

As an aside, you may at this point be wondering, if everything has to maximize for something what does religion say that God maximizes for? Well I only feel competent to speak about Mormonism, but on that count we know the answer. It comes in Moses 1:39:

For behold, this is my work and my glory—to bring to pass the immortality and eternal life of man.

This sounds like it would align pretty well with human values.

To return to Alexander’s plan, as you might imagine there are a lot of challenges when you decide to build a god. But what about the other plan: religious faith?

Many people will complain that there’s very little evidence for God (to be fair that may be why they call it faith…) and that there are several problems, not the least of which is the problem of evil and suffering. In a marvelous piece of serendipity I believe research into artificial intelligence has given us a great answer to those problems (an answer Alexander himself expressed some admiration for.) But I have yet to talk about Fermi’s Paradox, and I think within the original post we have yet another reason to believe there might be a God.

As Alexander describes Moloch there seems to be no reason why he hasn’t swallowed the universe. In fact Alexander says:

This is the ultimate trap, the trap that catches the universe. Everything except the one thing being maximized is destroyed utterly in pursuit of the single goal, including all the silly human values.

Oh, wait, that’s not Moloch, that’s what happens if we get Alexander’s AI god wrong. But I suppose that’s one form Moloch could take. But he also says when speaking of walled gardens:

Do you really think your walled garden will be able to ride this out?

Hint: is it part of the cosmos?

Yeah, you’re kind of screwed.

But if this is true, if Moloch will eat everything in its path, why hasn’t it already? Why haven’t we been destroyed by the Berserkers of Fred Saberhagen? Turned into computronium by alien AIs? Been enslaved by Kang and Kodos? If your argument is that interstellar travel is hard, then why haven’t we seen evidence of the Moloch-style process of creating a Dyson sphere? Or given the prominent place memes have in Alexander’s post, why haven’t any infectious messages been broadcast at us?

As usual, it’s always possible that we are entirely alone. Which I think leads to the worrying conclusion that Moloch is a strictly human creation… It’s also possible that the “Gardener over the entire universe” Alexander says we need, already exists, and our task is to figure out what he wants from us.

Finally, I think Alexander and I both agree that the harvest is past and the summer is ended, and that the path to salvation is a narrow one. He pins his hopes on transhumanism and a friendly AI. I’m pinning my hopes on the existence of God. If you think I’m silly, that’s fine. If you think Alexander is silly, that’s fine. If you think both of us are silly, well then, are you sure you’re not worshipping Moloch without even realizing it?

If you don’t think I’m silly, or if you do, but you find it amusing, consider donating.

Predictions Revisited for 2018

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

It’s that time of the year when some people resolve to do better in the coming 12 months than they did in the last 12. Other people use the occasion of the New Year to review the year just passed by releasing best of (and worst of) lists. Still others might take the opportunity to honor those who have died in 2017 or to bemoan all the bad things which happened (most of which appear to involve Trump.)

If you’re of a slightly more analytical bent, you might make predictions for the coming year. And if you’re trying to be scientific, this is also the time when you review how well you did on last year’s predictions. I plan to use this post to do both, but as long time readers might know, my system for prediction is somewhat different than most people’s. The predictions I made last year were permanent predictions. Or at least I was predicting as far out as I thought I had any chance of seeing clearly, which means my predictions aren’t quite permanent; I actually cut things off at 100 years.

I should also mention that last year’s prediction post ended up being one of my favorites, and not because of the predictions, but because it was one of the posts I returned to over and over again when I needed to explain my view of the world. This mostly happened during in-person discussions, but you may have noticed that it made an appearance not that long ago when I was discussing the difference between Bayesian Rationality and Talebian antifragility.

The theme of the previous post that I kept returning to is important enough that it deserves to be repeated here, though I won’t go into quite as much detail as I did in the previous posts.

To start with, recently there has been a big push to make people more accountable for their predictions, to do a better job of checking the accuracy of those predictions after the fact, and finally for predictors to apply confidence levels to the predictions they’ve made. The biggest advocate for this is Philip Tetlock of the Good Judgment Project (and also the author of Superforecasting.) In my previous posts, I pointed out that while this effort is laudable, it has a tendency to miss Black Swans. Let me give you an example of what I mean, and instead of using an example of a negative Black Swan, as I so often do, this time I’ll use an example of a positive Black Swan.

The biggest financial story of the year has been the explosion of value in cryptocurrencies. But, for most people, more important than the story itself was the fact that you could have made an enormous amount of money by investing in something like Ethereum, which went up a staggering 10,000% in 2017. How did the Good Judgment Project do on predicting this? As far as I can tell they didn’t even try. I couldn’t find any cryptocurrency-related predictions on their site for 2017. (If you do find any please point them out to me.) This is not to say that no one made any cryptocurrency predictions. Scott Alexander of SlateStarCodex uses a similar prediction methodology, and he did make a cryptocurrency prediction. He predicted, with 60% confidence, that bitcoin would end the year higher than $1000. At the time he made the prediction bitcoin was below $1000, but just barely, so essentially what he was saying is that he was 60% sure that you wouldn’t lose money on bitcoin. Not exactly a ringing endorsement, and definitely not something that gave any hint as to what was actually going to happen in 2017. Which is not to fault Alexander in any way. But if you were using this sort of prediction as a guide for how 2017 would look, there were some glaring omissions.

On the other hand you had people like the Winklevoss twins, who were saying back in 2013 that bitcoin was going to explode, and who spent from December 3rd of 2013 through February of 2017 looking kind of foolish as bitcoin’s value was basically flat. But now they’re the first bitcoin billionaires. They would have done terribly as entrants in the Good Judgment Project in 2014, 2015 and 2016, but in the end, even if their timing was off, they made the right bet.

The point of all this is that most of the things which really change the world (like bitcoin or the election of Trump) have a really low probability of happening. And in fact part of the reason why they’re so impactful is because of this low probability. The impact is greater because no one is prepared for it. But if, like the Good Judgment people, your primary goal is accuracy, then you’re not even going to make predictions about these events. Or if you do, they’re going to be very conservative. Like Scott Alexander’s 60% confidence prediction that bitcoin would end the year above $1000.

I said I would keep this short, and the idea of exposing yourself to positive black swans (like bitcoin) and protecting yourself from negative black swans has been covered in many of my posts. But as it relates to my prediction system vs. the Good Judgment system, the point I want to get at is that, despite what people might think, the best way to deal with an uncertain future is not to maximize the accuracy of your predictions. It’s to have a plan for even those things that appear really rare, if the impact of those rare events is great enough. Not because they’re likely but because the consequences are so great. In other words, sometimes it only takes being wrong once to undo all the times you were right. And sometimes it only takes being right once to undo all the times you were wrong. If the future proceeds how everyone expects, then having those same expectations is not going to get you very much, nor is much intelligence required. It’s when things happen that very few people expected that wisdom and preparation become important.
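This asymmetry can be made concrete with a toy expected-value calculation (the probabilities and payoffs below are invented purely for illustration): a bet that’s wrong 95% of the time can still be worth far more than one that’s right 90% of the time, which is exactly why grading predictors on accuracy alone misses what matters.

```python
# Two hypothetical yearly bets, illustrating the accuracy-vs-payoff asymmetry.
# "Safe" bet: 90% chance of a small gain, 10% chance of a small loss.
# "Black swan" bet: 5% chance of a huge payoff, 95% chance of losing the stake.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

safe = expected_value([(0.90, 1.0), (0.10, -1.0)])          # right 90% of the time
black_swan = expected_value([(0.05, 100.0), (0.95, -1.0)])  # right 5% of the time

print(safe, black_swan)  # the rarely-right bet has the higher expected value
```

A pure accuracy score would reward whoever bet against the black swan every single year, right up until the year that one hit undid everything.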

With that introduction out of the way, let's see how my permanent predictions have held up over the previous year. So far I'm not wrong about anything, which is good; it would be pretty embarrassing to make a permanent prediction and be wrong less than a year later. Though I will admit that the winds of 2017 have not always blown in my favor.

Artificial Intelligence

1- General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.

This is one of the areas where there were a lot of exciting developments during 2017. The biggest came when AlphaGo beat the world's number one Go player in May. There was further excitement in December when its successor, AlphaZero, apparently mastered chess as well, though there are a lot of quibbles with that result.

Last year I linked to a list and said that if a single AI could do everything on the list my prediction would have failed. Reviewing the list now, I think there are a few items where we’re really close, but in almost all cases every item needs a specialized system. That’s why AlphaZero making the transition from Go to Chess was such big news, but even that is still a long way away from any kind of general artificial intelligence.

2- A completely functional reconstruction of the brain will turn out to be impossible.

I haven’t come across anything one way or the other on this one, though it remains a big area of interest for scientists. But to the best of my knowledge, there were no major breakthroughs in 2017. I know Robin Hanson is a big advocate for this technology, so I promise that by next year I will at least read his book Age of Em.

3- Artificial consciousness will never be created.

I’ve seen no evidence that 2017 brought us any closer to understanding consciousness, let alone implementing it artificially. I would argue that we’re really far away from this.

One of the points I'd like to emphasize, as we review these predictions and sets of predictions, is to continually ask, "But what if Jeremiah is wrong?" The AI section is particularly interesting in this regard because it can go either way. Many people think that if we do manage to develop superintelligent AI it will be the end of all our problems. Other people think the exact opposite: they agree with the first group that a superintelligent AI would be the end of all our problems, but only because it will end all of us. On the positive side, if I'm wrong, then the consequences are almost entirely beneficial, and other than the hit to my ego, it will matter not at all. On the negative side, if I'm wrong then there are grave consequences, which is why, despite arguing it won't happen, I am nevertheless completely in favor of efforts to ensure friendly AI, and have even donated money to that end.

I could see someone arguing that the antifragile thing to do is assume that we might create a malevolent AI, and that under the precautionary principle we should do everything in our power to stop AI research. I would also be fine with this, though I don't think, given the way technology works, that such a ban would be very effective. I also understand that by arguing it's impossible I might weaken attempts to control AI (oh, that I were that influential), making the negative black swan more likely, but in this case I think the bigger danger is putting all of our eggs into the salvation-through-AI basket, and neglecting other, more worthwhile endeavors.

Transhumanism


1- Immortality will never be achieved.

In the AI section most of the news was not good for me and my predictions. In this section the news out of 2017 is more mixed, especially on this topic. You may have heard that life expectancy in the US has actually gone down two years in a row. One theory for achieving immortality is that we'll soon reach a point where life expectancy increases by more than one year every year, which would amount to a form of immortality. But if we can't even keep the life expectancy gains we already have, then this gradual method of immortality appears unlikely. Which is not to say there couldn't be some miraculous breakthrough that brings immortality in one fell swoop, but 2017 brought no sign of that either.

2- We will never be able to upload our consciousness into a computer.

This is closely related to the last section, and most of what I said there applies here. Needless to say by any reasonable estimate we’re a long way away on this.

3- No one will ever successfully be returned from the dead using cryonics.

Another area where the technology is mostly still non-existent, meaning progress is difficult to measure. But one way to get an approximate sense of progress is to look at the number of people in cryonic suspension. And when one of the leading cryonics organizations added only three people in 2017 (down from six in 2016 and ten in 2015), I have to assume that no significant breakthroughs were made over the last year.

To be honest this surprises me a little bit. If I were a transhumanist I think I'd definitely sign up for cryonics, but very few people have. And it's not that people have signed up but just haven't died yet: the two largest organizations have a total of only 3,410 members of all types (this includes people who've submitted DNA, and pets!).

To turn once again to the question, but what if I’m wrong? Well, then we get immortal individuals, living their life in a virtual paradise. And this paradise is available to anyone who coughs up $28,000 for cryonic suspension. (That seems cheap. You can see why I’m surprised more atheists/transhumanists/adventurous souls haven’t taken advantage of it.)

But if I’m right, then we have a bleaker future. We’re going to have to either get our act together, or hope that there’s a higher power out there. If we prepare for a future where we will never escape our problems by uploading our consciousness, and it turns out we’re wrong, then we’ve lost nothing, but if we assume that we can escape in this fashion, and behave accordingly, and then it doesn’t happen we could be in a lot of trouble. And I realize that this caution does not apply to very many people, but I expect that number will go up over time.

Outer Space

1- We will never have an extraterrestrial colony (Mars or Europa or the Moon) of greater than 35,000 people.

Elon Musk continues to push ahead, and in 2017 he released an updated version of his Mars plan. Apparently the big thing is that he's made the rockets and the entire program more efficient. (There's a good breakdown here.) It continues to be insanely ambitious, and while I hesitate to make short-term predictions, I predict he won't hit his goal of landing people on Mars in 2024.

Outside of Musk, there is still NASA, which currently has a plan to make it to Mars sometime in the 2030s. Though my impression is that NASA has had a vague plan to go to Mars for as long as I've been alive, and it was always a decade or two in the future, just like now. In other news, Trump just signed a directive to go back to the Moon, but as of yet there are no details.

Thus, while we’re still a long way away from having any extraterrestrial colony of any size, if someone wanted to argue that we moved marginally closer in 2017, I wouldn’t fight them on it.

2- We will never establish a viable human colony outside the solar system.

Obviously the path to this runs through Mars, so everything I said above applies here; just multiply the difficulty by several orders of magnitude. Also, if you want a very in-depth look at this I would once again recommend reading Aurora by Kim Stanley Robinson. I know it was published in 2015, but I think it gives a pretty up-to-date view of the difficulties.

3- We will never make contact with an intelligent extraterrestrial species.

In some respects this is potentially the easiest of the three predictions for me to be wrong about. It’s possible that aliens will do all the work. And we did see the first (confirmed) object from outside the solar system in 2017, which, interestingly, was shaped very strangely, and therefore may have been artificial. I guess that could be counted as progress towards proving me wrong, but on the other hand, I think that every year that goes by without any contact adds to the probability that there will never be any contact.

If you were paying attention, you noticed that I re-ordered 1 and 2 from their original positions, since there's no reasonable scenario under which extrasolar colonies aren't strictly dependent on extraterrestrial colonies. Beyond that, we once again ask: but what if I'm wrong? And, once again, I think there is very little downside if I'm wrong, at least on the first two. (Being wrong about the third prediction creates a future I don't think any of us can imagine.) And as with many of these predictions I hope I am wrong, but when it comes to leaving the Earth things are a little bit different.

I think there are a significant number of people, especially among the more educated, who assume, on some level, that however bad it gets here, we will eventually be able to leave the Earth behind. And an even greater number of people who assume that progress in general will fix the problems eventually, even if they haven’t thought things all the way through to the necessity of eventually leaving the Earth. And the point I’ve made in several posts is that if leaving Earth is the eye of the needle, if that is going to eventually be the only way forward, then we probably need to take it a lot more seriously, because it’s going to be significantly more difficult than most people think. And as I pointed out in the last post, it’s possible, even likely that taking it seriously enough to make it happen will involve some trade-offs.

War (I hope I’m wrong about all of these)

1- Two or more nukes will be exploded in anger within 30 days of one another.

The excitement over North Korea during 2017, all by itself, has unfortunately brought this prediction much closer to being fulfilled.

2- There will be a war with more deaths than World War II (in absolute terms, not as a percentage of population.)

Another thing I desperately hope is not true. And while the news on the first point is bad, here I haven't seen much sign that any of the major powers is interested in all-out war. Which is good.

3- The number of nations with nuclear weapons will never be less than it is right now.

When introducing this prediction last year I mentioned that the current number is nine: US, Russia, Britain, France, China, North Korea, India, Pakistan and Israel. I haven’t seen any hint that any of the current countries are about to give up their weapons, but I have seen talk about the idea of Japan and South Korea possibly acquiring them in response to North Korea. I think it would take a lot for Japan to decide to develop nukes, but the difficulty would be entirely emotional and psychological, the technical side of it would be easy for them.

More than any other section, this is the one where I hope I'm wrong; I even say so in the title. I very much hope Steven Pinker is right and that the world is getting less and less violent, and I am aware of the evidence for this. But, not to beat a dead horse, it bears repeating that the disappointment which comes from expecting a war and not having one is far better than the disappointment of not expecting a war and getting one.

Miscellaneous


1-There will be a natural disaster somewhere in the world that kills at least a million people

Given the random nature of disasters, I don't think 2017 did much to change the probabilities here. Though for those who believe that global warming will cause more hurricanes and greater climate instability (I think the jury is still out), every year we don't do something significant about carbon emissions brings us closer to that sort of climate disaster.

2-The US government’s debt will eventually be the source of a gigantic global meltdown.

It wasn’t that long ago that I covered this at some length so I won’t add much except to say that the deficit was $666 billion in 2017 (I’ll try not to read too much into that number.) And the debt grew by only $600 billion, which was slower than the Obama era average of over a trillion.

3- Five or more of the current OECD countries will cease to exist in their current form.

2017 brought some excitement here with the Catalonian independence vote. And of course there was talk of a Calexit after the election of Trump. Meaning this is another prediction I think is looking pretty good after 2017.

What all of these predictions have in common is that they're Black Swans, improbable events that would have a major impact. Most of the items on the list are things people are insufficiently protected against, while others (primarily the Outer Space section) are things we're doing a bad job of capitalizing on. Looking forward, I don't think any of my predictions will be proven true or false in 2018, and probably not in 2019, but I do believe that sometime between now and January 1st, 2117, all of them will end up being accurate. And it doesn't matter if it only happens that one year; if humanity has made the wrong call, in most cases that one time is all it will take.

One final prediction: I'll continue to try to come up with vaguely humorous appeals for donations, and most of them will largely fail. Despite that, consider donating anyway.