Month: August 2017

A Potpourri of Pedantry

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

This week I thought that rather than spending the entire space on one subject that I’d cover a bunch of smaller subjects I’ve been thinking about recently. I’m going to be all over the place, but hopefully within my ramblings you’ll find something interesting, frightening or appalling.

Subject 1- Electromagnetic Pulses (EMP)

Last month The Economist ran a “The World If” section, in which they predict what the world would look like if various things happened. One article examined the US if Trump were to win a second term, another speculated about the effects of widespread blockchain adoption, and yet another imagined the Middle East if the Ottoman Empire had not collapsed. But the one I want to talk about describes the potential effects of a sufficiently strong electromagnetic pulse on the power grid of the United States. The EMP in question could come either from the Sun, in the form of a coronal mass ejection, or from a nuclear weapon exploded high (44 miles or so) above the country. Whichever way the EMP comes, it would be devastating to any electronics, including *cue ominous music* the power grid.

I was aware of the potential devastation which could be caused by a strong electromagnetic pulse, but the article provided some very interesting specifics I hadn’t encountered before. I knew an EMP would blow out transformers, I did not know that:

America runs on roughly 2,500 large transformers, most with unique designs. But only 500 or so can be built per year around the world. It typically takes a year or more to receive an ordered transformer, and that is when cranes work and lorries and locomotives can be fuelled up. Some transformers exceed 400 tonnes.

As you can probably infer from those statistics, an EMP which knocked out some significant portion of our transformers would knock out electricity across the US for a very long time. This isn’t a disaster where a month later everything’s back to normal, this is a disaster where, “Martial law ends six months after the original energy surge.” At which point “roughly 350,000 Canadians and 7m Americans have died.”
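To get a feel for the timescales involved, here's a back-of-envelope sketch using the article's two figures (roughly 2,500 large US transformers, roughly 500 built worldwide per year). The fraction destroyed and the US share of world production are hypothetical numbers of my own, not from the article:

```python
# Back-of-envelope transformer replacement time. The first two
# constants come from the Economist article; the damage fraction
# and the US share of world production are invented assumptions.

US_TRANSFORMERS = 2500
WORLD_PRODUCTION_PER_YEAR = 500

def years_to_rebuild(fraction_destroyed, us_share_of_production):
    destroyed = US_TRANSFORMERS * fraction_destroyed
    return destroyed / (WORLD_PRODUCTION_PER_YEAR * us_share_of_production)

# Even if the US somehow claimed ALL of world production:
print(years_to_rebuild(0.3, 1.0))   # 1.5 years for a 30% loss
# With a (hypothetical) 20% share of world production:
print(years_to_rebuild(0.3, 0.2))   # 7.5 years
```

And that's before accounting for the article's point that the cranes, lorries, and locomotives needed to move 400-tonne transformers may themselves be without power.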

Their description of those six months is pretty chilling (I had never thought about the people who would end up trapped in the elevators of skyscrapers) and I recommend you read the initial article, which goes on to talk about threats to the power grid beyond just EMPs. Another thing I learned was that:

Shooting up transformers at just nine critical substations could bring down America’s grid for months, according to an analysis performed in 2013 by the Department of Energy’s Federal Energy Regulatory Commission.

I understand that nine is more than one, so we don’t literally have a single point of failure, but the number is still much smaller than I would have expected. The article goes on to mention that the location of these substations is secret, which is something, I suppose, but it hardly makes up for the underlying fragility of the system as a whole, which is just waiting for a shock to break it. A shock that could come any day now and which, the article makes clear, we are woefully unprepared for.

Subject 2- An example of the difference between fragility and volatility

As long as we’re on the subject of modernity and fragility, I was reading Rationality: From AI to Zombies by Eliezer Yudkowsky (who’s appeared a lot in this space recently), when I came across the following assertion:

When dams and levees are built they reduce the frequency of floods, and thus apparently create a false sense of security, leading to reduced precautions. While building dams decreases the frequency of floods, damage per flood is afterward so much greater that average yearly damage increases.

This is a fantastic example of the difference between volatility and fragility, and of the great weakness of the modern world. We have the technology to create great big barriers against risk, whether it’s building a dam, bailing out banks, or the barrier to aggression provided by nuclear weapons. All of this reduces volatility, and we go from having floods every 10-20 years to having no floods, at least within living memory. But that’s all based on the dam, or the government, or the nuclear deterrent never failing. If it does fail, then the flood or the financial crisis or the war is many, many times worse, leading, as he said, to greater total damage, even when we average it across all the years.
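Yudkowsky's dam arithmetic is easy to sketch. All the numbers below are invented for illustration; the point is only the shape of the trade-off:

```python
# Toy illustration: the dam lowers flood FREQUENCY but raises
# damage PER FLOOD, so expected yearly damage can go up.
# All figures are hypothetical.

def expected_yearly_damage(floods_per_year, damage_per_flood):
    return floods_per_year * damage_per_flood

without_dam = expected_yearly_damage(1 / 10, 1.0)   # a minor flood each decade
with_dam = expected_yearly_damage(1 / 100, 25.0)    # rare but catastrophic

print(without_dam, with_dam)  # 0.1 vs 0.25: average damage more than doubles
```

The dam trades frequent small losses for rare enormous ones, which is exactly the exchange of volatility for fragility.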

The problem, as the book went on to say, is that rather than extrapolating the possibility of large hazards from our memory of small hazards:

Instead, past experience of small hazards seems to set a perceived upper bound on risk. A society well-protected against minor hazards takes no action against major risks, building on flood plains once the regular minor floods are eliminated. A society subject to regular minor hazards treats those minor hazards as an upper bound on the size of the risks…

Technology creates fragility. Without a power grid and reliance on electricity an EMP doesn’t inflict any damage, let alone the deaths of seven million people. In the same way, without dams you can’t have a gigantic, completely unprecedented flood, nor do people decide to build on floodplains, and when catastrophe does come I think we’ll find that a lot of the modern world is built on just such a metaphorical floodplain.

Subject 3- Overcomplicated

Recently the author of Overcomplicated, Samuel Arbesman, contacted me and said I might like his book. It did sound interesting, and it was available as an audiobook, which makes any given book significantly easier to “read”, so I picked it up.

I confess it was not as pessimistic as I would have liked, but that’s not necessarily a bad thing for the rest of you. I assume I’m on the tail end of the pessimism distribution and most people would actually prefer a little less pessimism. Despite this lack of pessimism, he did talk about some very interesting concepts.

One of the concepts Arbesman talks about is the idea that we have passed from the Enlightenment to the Entanglement. This idea was originally proposed by the computer scientist Danny Hillis, and it has the distinction, which it shares with most great labels, of immediately suggesting most of its implications through the word itself, without your having to know anything about the specifics. In other words, upon hearing the term you grasp the aptness of the description at once.

But what are the specifics of The Entanglement? To begin with, it contains something of a paradox: the more interconnected things get, the more specialized knowledge becomes. This is easy to see in computers, where one 100,000-person company makes just the CPU, another 10,000-person company makes just the graphics cards, and another 60,000-person company puts it all together. But you can also see it in corporations, where the legal department may know very little about how to make widgets but a lot about how to trademark them, and the financial department doesn’t care about making widgets or trademarking them but is very aware of the tax implications of having a Panamanian widget factory. In theory the CEO might know a little bit about everything, but I’m sure even then there will be large blank spots (probably centered in the IT department).

Another trend specific to the Entanglement is the need, as Arbesman points out, to move from physics thinking to biological thinking. Physics thinking is straightforward and reductionist: a certain force applied to a given mass creates a very well-defined acceleration. Biology is messier, more entangled, and much harder to understand. Because of these difficulties people prefer the physics model. They want to assume that money will cure poverty, that increasing the amount of serotonin will cure depression, and that a given input into a computer will always produce the same output. But when it comes to biological systems that’s not the case, and Arbesman makes the case that, increasingly, the biological model of technology is a better fit than the physics model.

On this point I couldn’t agree more, and my big disappointment was that despite mentioning fragility on many occasions he didn’t ever mention Taleb and his work in that area. Also as I mentioned I don’t think he was nearly pessimistic enough about the problems that will arise from the Entanglement, especially those stemming from the increasing narrowness of our understanding.

Subject 4- The Indo Europeans

Speaking of books I’ve been reading, I’ve also been making my way through In Search of the Indo-Europeans by J.P. Mallory. I don’t think I’d recommend it; it’s very dry and obviously geared towards an academic audience, but the underlying subject is fascinating, enough so that I’m sure it could be the basis of an entire post.

I decided against this, mostly because of my vague feeling that I was already wandering into one of those areas where there’s a lot of baggage. There was a sense that if I turned over too many rocks I’d end up in a discussion of Aryans and Nazis and the Master Race, which is something I’d like to avoid. But, there are many interesting bits before falling into that pit, and here, in no particular order, are a few things I thought were worth chewing on:

  1. The book was primarily concerned with locating the homeland of the original Indo-Europeans (called Proto-Indo-Europeans in the book). Mallory’s best guess is the Pontic-Caspian region, the steppe area lying to the north of the Black and Caspian Seas. I say his best guess, but there is obviously a whole group of scholars who support this hypothesis (and, presumably, a whole group who don’t).
  2. There’s ample evidence that this was also the area where horses were first domesticated. And while the book doesn’t go into it all that much, this is very probably why the Indo-Europeans were the Indo-Europeans, and not some nameless tribe forgotten by history, not even to be remembered by archeologists.
  3. If domesticating the horse represented the technological edge of the Indo-Europeans what did they do with those horses? To me the answer of, “conquering everything in sight” (think Huns and Mongols) seems obvious, but some people assume something less bloody. And to be fair the time frame of the expansion does spread across thousands of years.

Of course, when you’re talking about something which largely happened before recorded history, there’s very little you can say that isn’t speculative to one degree or another. But from my perspective it seems safe to say that there was some large empire which, in, I’m sure, lots of different forms, by 2000 BC dominated the middle of Eurasia in a strip from the western border of Germany to the eastern border of Kazakhstan. Was this one empire, or had it already broken into several? It’s hard to say, but either way its domination was so total that all the original languages were lost.

By 500 BC it was obviously not a single empire, but by that point the Indo-Europeans had spread all the way from India to Ireland. Speakers of these languages continued to spread, until today, outside of the Far East and the Arabic-speaking countries, essentially everyone has, as one of their official languages, a language of Indo-European descent.

You can see where the next obvious question would be what makes this culture and people so unique, and before you know it you’re at the edge of the pit, staring down at the theory of a master race…

Subject 5- A Plethora of Hate

I guess as long as I’m alluding to the subject of Nazis, it might be time to talk about current events. I actually intend to write a longer post on this subject later, but for now I wanted to draw your attention to an insight John Michael Greer had recently, in a post called Hate is the New Sex.

If, like me, you were confused by the title, Greer’s post is making a comparison between

…the attitude, prevalent in the English-speaking world from the middle of the nineteenth century to the middle of the twentieth, that sex was the root of all evil.

And the current, ubiquitous belief that hate is the root of all evil. I hope that this belief, and its omnipresence, is obvious enough not to require me to offer any further proof. But if it isn’t, reflect on the following points Greer makes in his post:

If you want to slap the worst imaginable label on an organization, you call it a hate group. If you want to push a category of discourse straight into the realm of the utterly unacceptable, you call it hate speech. If you’re speaking in public and you want to be sure that everyone in the crowd will beam approval at you, all you have to do is denounce hate.

Greer goes on to clarify (as do I) that this does not mean that he is saying that hate is good. Hopefully we’re all wise enough to recognize that there is plenty of territory available between condemning something as absolutely bad and praising it as absolutely good.

Rather, his key point, as he lays out in the title, is to draw a comparison between the fight against immorality a century or so ago and the fight against hate now. While this comparison isn’t perfect, it at least makes some interesting predictions.

His first prediction is that there will be an enormous amount of hypocrisy and evasion, particularly along class lines. During the Victorian Era, for example, the upper class didn’t eliminate all sex, they just railed against the kind of sex the lower classes were engaged in. And today, when the two sides line up across from each other, you don’t see one side that has banished all hate while the other is full of hate; you see two sides, both full of hate, with different definitions of which sort of hate is acceptable.

Second, he predicts that “hate” will become more appealing as it becomes more taboo, and that just as the sexual revolution followed the prudery of the early 20th century, we should consider the possibility that something resembling a hate revolution will follow the current crackdown.

I have seen many people mention the hypocrisy, but I don’t see many people talking about hate becoming increasingly appealing as it becomes increasingly taboo, but I think if you look closely you’ll see that this is exactly what’s taking place.

For my own part, I think I have two things to add to the discussion. First, one thing that Greer doesn’t cover (though I doubt he disagrees with it) is the increased speed at which everything happens now vs. 100 years ago. Which is to say, if hate does take a similar arc to sex, I don’t think it’s going to take 100 years to play out, which means, if there is a “hate revolution”, it may happen a lot sooner than we would like.

My other addition to the discussion is to examine the role of Christianity in this process. Even if someone knows nothing else about Jesus, they’ll be able to tell you that he was strongly opposed to hate in all its forms. I’m sure this isn’t far from the mark, but it’s interesting that when you actually look at the Gospels, by my (admittedly quick) calculations, at least half of the references to hate concern being hated as a sign that you’re his disciple. Something to think about…

Subject 6- Checking in on Fermi’s Paradox

I keep a pretty close eye on whatever is currently being said about the paradox (Google Alerts, if you must know), and every few months another explanation for the paradox will make the rounds.

The latest one stems from the assumption, which many people share, that eventually life will go “post-biological,” as they say; or, to put it another way, that eventually robots and AIs will inherit the earth. Since computers run more efficiently when it’s cold, the explanation for the paradox is that we haven’t encountered aliens because they’re aestivating. Aestivation is the inverse of hibernation: instead of going dormant while it’s cold, you go dormant while it’s hot. And these aliens have gone dormant waiting for the universe to get colder, because then they’ll work better.
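The physical intuition behind "computers run better when it's cold" can be made concrete with Landauer's limit: erasing one bit of information costs at least kT ln 2 joules, which falls linearly with temperature. I haven't read the aestivation paper, and its actual argument is surely more involved, but the basic scaling is a one-liner:

```python
import math

BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit(temperature_kelvin):
    """Minimum energy (joules) to erase one bit at a given temperature."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

room_temp = landauer_limit(300.0)    # roughly 2.9e-21 J per bit today
deep_future = landauer_limit(3.0)    # at 3 K, each bit is far cheaper

print(room_temp / deep_future)  # ~100: the same energy buys ~100x the computation
```

Since the limit is linear in temperature, a civilization that waits for the universe to cool gets proportionally more computation out of the same energy budget, which is (as I understand it) the core of the hypothesis.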

This isn’t the worst explanation I’ve heard for the paradox, but neither do I find it especially compelling. That said, it does have the benefit of being a genuinely new idea, unlike the recently recirculated idea that advanced civilizations may destroy themselves, which has been around since the lunch table where Fermi first made his famous utterance, and probably long before then.

One problem I see with the aestivation hypothesis (and I haven’t read the paper; I don’t think it’s available for free) is that every civilization is going to be under evolutionary pressure from other civilizations, and in this case the artificial civilizations are competing with biological civilizations, who like the heat. So either they have to be positive that all biological civilizations will evolve into artificial civilizations AND be friendly, or they should be wiping out those biological civilizations before they get too advanced. The last thing they want is for a biological civilization to outcompete them while they’re dormant.

In other words, this explanation appears to ignore the Dark Forest hypothesis: that the universe is red in tooth and claw. Under the aestivation hypothesis, not only would the game have to be non-violent, it would additionally have to allow you to sit on the sidelines, not actually playing, and still suffer no ill effect.

In the final analysis I think a lot of proposed explanations are like this. They address one part of the paradox, and in the process they resolve that part, but end up completely neglecting all of the other factors involved in the paradox.

That looks like all I have space for. Let me know if you enjoy this format, and if so, I’ll do it more often. Also I apologize for missing last week, I went on a long road trip and I expected to have time to write on the drive, but, that ended up being way too optimistic.

It’s not many people who can cover such a broad array of topics. And even fewer who can be wrong about all of them. If you appreciate that sort of blind confidence consider donating.

Remind Me What The Heck Your Point is Again?


The other day I was talking to my brother and he said, “How would you describe your blog in a couple of sentences?”

It probably says something about my professionalism (or lack thereof) that I didn’t have some response ready to spit out. An elevator pitch, if you will. Instead I told him, “That’s a tough one.” Much of this difficulty comes because, if I were being 100% honest, the fairest description of my blog would boil down to: I write about fringe ideas I happen to find interesting. Of course, this description is not going to get me many readers, particularly if they have no idea whether there’s any overlap between what I find interesting and what they find interesting.

I didn’t say this to my brother, mostly because I didn’t think of it at the time. Instead, after a few seconds, I told him that of course the blog does have a theme, it’s right there in the title, but I admitted that it might be more atmospheric than explanatory. Though I think we can fix that with the addition of a few words, which is how Jeremiah 8:20 shows up on my business cards. (Yeah, that’s the kind of stuff your donations get spent on, FYI.) With those few words added, it reads:

The harvest [of technology] is past, the summer [of progress] is ended, and we are not saved.

If I was going to be really pedantic, I might modify it, and hedge, so it read as follows:

Harvesting technology is getting more complex, the summer where progress was easy is over, and I think we should prepare for the possibility that we won’t be saved.

If I was going to be more literary and try to pull in some George R.R. Martin fans I might phrase it:

What we harvest no longer feeds us, and winter is coming.

But once again, you would be forgiven if, after all this, you’re still unclear on what this blog is about (other than weird things I find interesting). To be fair to myself, I did explain all of this in the very first post, and re-reading it recently, I think it held up fairly well. But it could be better, and this assumes people have even read my very first post, which is unlikely, since at the time my readership was at its nadir. Despite my complete neglect of anything resembling marketing, it has grown since then, and presumably at least some of those people have not read the entire archive.

Accordingly, I thought I’d take another shot at it. To start, one concept which runs through much (though probably not all) of what I write, is the principle of antifragility, as introduced by Nassim Nicholas Taleb in his book of (nearly) the same name.

I already dedicated an entire post to explaining the ideas of Taleb, so I’m not going to repeat that here. But, in brief, Taleb starts with what should be an uncontroversial idea, that the world is random. He then moves on to point out the effects of that, particularly in light of the fact that most people don’t recognize how random things truly are. They are often Fooled by Randomness (the title of his first book) into thinking that there are patterns and stability where there aren’t. From there he moves on to talk about extreme randomness by introducing the idea of a Black Swan (the name of his second book), which is something that:

  1. Lies outside the realm of regular expectations
  2. Has an extreme impact
  3. Is rationalized after the fact, with people going to great lengths to show how it should have been expected.

It’s important at this point to clarify that not all black swans are negative. And technology has generally had the effect of increasing the number of black swans of both the positive (internet) and negative (financial crash) sort. In my very first post I said that we were in a race between these two kinds of black swans, though rather than calling them positive or negative black swans I called them singularities and catastrophes. And tying it back into the theme of the blog a singularity is when technology saves us, and a catastrophe is when it doesn’t.

If we’re living in a random world, with no way to tell whether we’re either going to be saved by technology or doomed by it, then what should we do? This is where Taleb ties it all together under the principle of antifragility, and as I mentioned it’s one of the major themes of this blog. Enough so that another short description of the blog might be:

Antifragility from a Mormon perspective.

But I still haven’t explained antifragility, to say nothing of antifragility from a Mormon perspective, so perhaps I should do that first. In short, things that are fragile are harmed by chaos, and things that are antifragile are helped by chaos. I would argue that it’s preferable to be antifragile all of the time, but it is particularly important when things get chaotic. Which leads to two questions: How fragile is society? And how chaotic are things likely to get? I have repeatedly argued that society is very fragile, that things are likely to get significantly more chaotic, and, further, that technology increases both of these qualities.

Earlier, I provided a pedantic version of the theme, changing (among other things) the clause “we are not saved” to the clause “we should prepare for the possibility that we won’t be saved.” As I said, Taleb starts with the idea that the world is random, or in other words unpredictable, with negative and positive black swans happening unexpectedly. Being antifragile entails reducing your exposure to negative black swans while increasing your exposure to positive black swans. In other words being prepared for the possibility that technology won’t save us.

To be fair, it’s certainly possible that technology will save us, and I wouldn’t put up too much of a fight if you argued it was the most likely outcome. But I take serious issue with anyone who wants to claim that there isn’t a significant chance of catastrophe. Being antifragile consists of realizing that the cost of being wrong if you assume a catastrophe and there isn’t one is much less than if you assume no catastrophe and there is one.
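That asymmetry can be sketched as a simple expected-cost comparison. Every number here is invented for illustration; the point is the shape of the payoffs, not the values:

```python
# Expected-cost sketch of the preparation asymmetry. The prep
# cost, loss figures, and catastrophe probability are all
# hypothetical; only the asymmetry between them matters.

def expected_cost(prep_cost, catastrophe_loss, p_catastrophe):
    return prep_cost + catastrophe_loss * p_catastrophe

# Preparing caps your downside at a known, modest cost;
# not preparing leaves the downside uncapped.
prepared = expected_cost(prep_cost=1.0, catastrophe_loss=10.0, p_catastrophe=0.05)
unprepared = expected_cost(prep_cost=0.0, catastrophe_loss=100.0, p_catastrophe=0.05)

print(prepared, unprepared)  # 1.5 vs 5.0: wrongly prepping is the cheaper mistake
```

Even at a modest catastrophe probability, the bounded cost of preparing beats the unbounded cost of being caught out, which is the whole argument in miniature.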

It should also be pointed out that, most of the time, antifragility is relative. To give an example, if I’m a prepper and the North Koreans set off an EMP over the US which knocks out all the power for months, I may go from being a lower-class schlub to being the richest person in town. In other words, chaos helped me, but only because I reduced my exposure to that particular negative black swan, and most of my neighbors didn’t.

Having explained antifragility (refer back to the previous post if things are still unclear) what does Mormonism bring to the discussion? I would offer that it brings a lot.

First, Mormonism spends quite a bit of time stressing the importance of antifragility, though the Church calls it self-reliance, and emphasizes things like staying out of debt, having a plan for emergency preparedness, and maintaining a multi-year supply of food. This aspect is not one I spend a lot of time on, but it is definitely an example of Mormon antifragility.

Second, Mormons, while not as apocalyptic as some religions, nevertheless reference the nearness of the end right in their name: we’re not merely Saints, we are the “Latter-Day Saints”. While it is true that some members are more apocalyptic than others, regardless of their belief level I don’t think many would dismiss the idea of some kind of Armageddon outright. Given that, if you’re trying to pick a winner in the race between catastrophe and singularity, or, more broadly, negative and positive black swans, belonging to a religion which claims we’re in the last days could help break that tie. Also, as I mentioned, it’s probably wisest to err on the side of catastrophe anyway.

Third, I believe Mormon doctrine provides unique insight into some of the cutting-edge futuristic issues of the day. Over the last three posts I laid out what those insights are with respect to AI, but in other posts I’ve talked about how LDS doctrine might answer Fermi’s Paradox. And of course there’s the long-running argument I’ve had with the Mormon Transhumanist Association over what constitutes an appropriate use of technology and what constitutes an inappropriate one. This is obviously germane to the discussion of whether technology will save us, and what the endpoint of that technology will end up being. And it suggests another possible theme:

Connecting the challenges of technology to the solutions provided by LDS Doctrine.

Finally, any discussion of Mormonism and religion has to touch on the subject of morality. For many people, issues of promiscuity, abortion, single-parent families, same-sex marriage, and ubiquitous pornography are either neutral or benefits of the modern world. This leads some people to conclude that things are as good as they’ve ever been, and that if we’re not on the verge of a singularity, then at least we live in a very enlightened era, where people enjoy freedoms they could never have previously imagined.

The LDS Church, and religion in general (at least the orthodox variety), takes the opposite view of these developments, pointing to them as evidence of a society in serious decline. Perhaps you feel the same way, or perhaps you agree with the people who feel that things are as good as they’ve ever been. But if you’re on the fence, then one of the purposes of this blog is to convince you that even if there is no God, it would be foolish to dismiss religion as a collection of irrational biases, as so many people do. Rather, if we understand the concept of antifragility, it is far more likely that religion represents the accumulated wisdom of a society.

This last point deserves a deeper dive, because it may not be immediately apparent why religions would necessarily accumulate wisdom, or what any of this has to do with antifragility. Religious beliefs can be either fragile or antifragile: they can either break under pressure or get stronger. (In fairness, there is a third category, things which neither break nor get stronger; Taleb calls this the robust category, but in practice it’s very rare for things to be truly robust.) If religious beliefs were fragile, or created fragility, they would have disappeared long ago. Only beliefs which created a stronger society would have endured.

Please note that I am not saying that all religious beliefs are equally good at encouraging antifragile behavior. Some are pointless or even irrational, but others, particularly those shared by several religions, are very likely lessons in antifragility. But a smaller and smaller number of people have any religious beliefs, and an even smaller number are willing to actively defend those beliefs, particularly the ones which prohibit a behavior currently in fashion.

However, if these beliefs are as useful and as important as I say they are then they need all the defending they can get. Though in doing this a certain amount of humility is necessary. As I keep pointing out, we can’t predict the future. And maybe the combination of technology and a rejection of traditional morality will lead to some kind of transhuman utopia, where people live forever, change genders whenever they feel like it and live in a fantastically satisfying virtual reality, in which everyone is happy.

I don’t think most people go that far in their assessment of the current world, but the vast majority don’t see any harm in the way things are either. What if they’re wrong about that?

And this might in fact represent yet another way of framing the theme of this blog:

But what if we’re wrong?

In several posts I have pointed out the extreme rapidity with which things have changed, particularly in the realm of morality, where, in a few short years, we have overturned religious taboos stretching back centuries or more. The vast majority of people have decided that this is fine, and that in fact, as I already mentioned, it’s an improvement on our benighted past. But even if you don’t buy my argument about religions being antifragile, I would hope you would still wonder, as I do, “But what if we’re wrong?”

This question applies not only to morality, but to technology saving us, the constant march of progress, politics, and a host of other issues. And I can’t help but think that people appear entirely too certain about the vast majority of these subjects.

In order to bring up the possibility of wrongness, especially when you’re in the ideological minority, there has to be freedom of speech, another area I dive into from time to time in this space. Also, you can’t talk about freedom of speech, or the larger ideological battles around speech, without getting into the topic of politics, a subject I’ll return to.

As I have already mentioned, and as you have no doubt noticed, the political landscape has gotten pretty heated recently, and there are no signs of it cooling down. I would argue, as others have, that this makes free speech and open dialogue more important than ever. In this endeavor I share a fair amount of overlap with the rationalist community, which you must admit is interesting, given that this community clearly has a large number of atheists in its ranks. But that failing aside, I largely agree with much of what they say, which is why I link to Scott Alexander over at SlateStarCodex so often.

On the subject of free speech the rationalists and I are definitely in agreement. Eliezer Yudkowsky, an AI theorist, who I mentioned a lot in the last few posts, is also one of the deans of rationality and he had this to say about free speech:

There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.

I totally agree with this point, though I can see how some people might choose to define some of the terms more or less broadly, leading to significant differences in the actual implementation of the rule. Scott Alexander is one of those people, and he chooses to focus on the idea of the bullet, arguing that we should actually expand the prohibition beyond just literal bullets or even literal weapons. Changing the injunction to:

Bad argument gets counterargument. Does not get bullet. Does not get doxxing. Does not get harassment. Does not get fired from job. Gets counterargument. Should not be hard.

In essence he want’s to include anything that’s designed to silence the argument rather than answer it. And why is this important? Well if you’ve been following the news at all you’ll know that there has been a recent case where exactly this thing happened, and a bad argument got someone fired. (Assuming it even was a bad argument which might be a subject for another time.)

Which ties back into asking, “But what if we’re wrong?” Because unless we have a free and open marketplace of ideas where things can succeed and fail based on their merits, rather than whether they’re the flavor of the month, how are we ever going to know if we’re wrong? If you have any doubts as to whether the majority is always right then you should be incredibly fearful of any attempt to allow the majority to determine what gets said.

And this brings up another possible theme for the blog:

Providing counterarguments for bad arguments about technology, progress and religion.

Running through all of this, though most especially through the topic I just discussed, free speech, is politics. The primary free speech battleground is political, but issues like morality and technology and fragility all play out at the political level as well.

I often joke that you know those two things that you’re not supposed to talk about? Religion and politics? Well I decided to create a blog where I discuss both. Leading me to yet another possible theme:

Religion and Politics from the perspective of a Mormon who thinks he’s smarter than he probably is.

Perhaps the final thread running through everything is that, like most people, I would like to be original, which is hard to do. The internet has given us a world where almost everything you can think of saying has been said already. (Though I've yet to find anyone making exactly the argument I make when it comes to Fermi's Paradox and AI.) But there is another way to approximate originality, and that is to say things that other people don't dare to say, but which, hopefully, are nevertheless true. Which is part of why I record under a pseudonym. So far the episode that most fits that description is the one I did on LGBT youth and suicide, with particular attention paid to the LDS Church's stance and role in that whole debate.

Going forward I’d like to do more of that. And it suggests yet another possible theme:

Saying what you haven’t thought of or have thought of but don’t dare to bring up.

In the end, the most accurate description of the blog is still that I write about fringe ideas I happen to find interesting. But at least by this point you have a better idea of the kind of things I find interesting, and if you find them interesting as well, I hope you'll stick around. I don't think I've ever mentioned it within an actual post, but on the right-hand side of the blog there's a link to sign up for my mailing list, and if you did find any of the things I talked about interesting, consider signing up.

Do you know what else interests me? Money. I know that’s horribly crass, and I probably shouldn’t have stated it so bluntly, but if you’d like to help me continue to write, consider donating, because money is an interesting thing which helps me look into other interesting things.

Returning to Mormonism and AI (Part 3)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

This is the final post in my series examining the connection between Mormonism and Artificial Intelligence (AI). I would advise reading both of the previous posts before reading this one (Links: Part One, Part Two), but if you don’t, here’s where we left off:

Many people who've made a deep study of artificial intelligence feel that we're potentially very close to creating a conscious artificial intelligence. That is, a free-willed entity which, by virtue of being artificial, would have no upper limit to its intelligence, and also no built-in morality. More importantly, insofar as intelligence equals power (and there's good evidence that it does), we may be on the verge of creating something with godlike abilities. Given, as I just said, that it will have no built-in morality, how do we ensure that it doesn't use its powers for evil? Leading to the question, how do you ensure that something as alien as an artificial consciousness ends up being humanity's superhero and not our archenemy?

In the last post I opined that the best way to test the morality of an AI would be to isolate it and then give it lots of moral choices where it's hard to make the right choice and easy to make the wrong choice. I then pointed out that this resembles the tenets of several religions I know, most especially my own faith, Mormonism. Despite the title, the first two posts were very light on religion in general and Mormonism in particular. This post will rectify that, and then some. It will be all about the religious parallels between this method for testing an AI's morality and Mormon theology.

This series was born as a reexamination of a post I made back in October where I compared AI research to Mormon Doctrine. And I’m going to start by revisiting that, though hopefully, for those already familiar with October’s post, from a slightly different angle.

To begin our discussion, Mormons believe in the concept of a pre-existence, that we lived as spirits before coming to this Earth. We are not the only religion to believe in a pre-existence, but most Christians (specifically those who accept the Second Council of Constantinople) do not. And among those Christian sects and other religions who do believe in it, Mormons take the idea farther than anyone.

As a source for this, in addition to divine revelation, Mormons will point to the Book of Abraham, a book of scripture translated from papyrus by Joseph Smith and first published in 1842. From that book, this section in particular is relevant to our discussion:

Now the Lord had shown unto me, Abraham, the intelligences that were organized before the world was…And there stood one among them that was like unto God, and he said unto those who were with him: We will go down, for there is space there, and we will take of these materials, and we will make an earth whereon these may dwell; And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;

If you've been following along with me for the last two posts then I'm sure the word "intelligences" jumped out at you as you read that selection. But you may also have noticed the phrase, "And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;" And the selection, taken as a whole, depicts a situation very similar to what I described in my last post, that is, creating an environment to isolate intelligences while we test their morality.

I need to add one final thing before the comparison is complete. While not explicitly stated in the selection, we, as Mormons, believe that this life is a test to prepare us to become gods in our own right. With that final piece in place we can take the three steps I listed in the last post with respect to AI researchers and compare them to the three steps outlined in Mormon theology:

AI: We are on the verge of creating artificial intelligence.

Mormons: A group of intelligences exist.

AI: We need to ensure that they will be moral.

Mormons: They needed to be proved.

Both: In order to be able to trust them with godlike power.

Now that the parallels between the two endeavors are clear, I think that much of what people have traditionally seen as problems with religion ends up being a set of logical consequences flowing naturally out of a system for testing morality.

The rest of this post will cover some of these traditional problems and look at them from both the “creating a moral AI” standpoint and the “LDS theology” standpoint. (Hereafter I’ll just use AI and LDS as shorthand.) But before I get to that, it is important to acknowledge that the two systems are not completely identical. In fact there are many ways in which they are very different.

First, when it comes to morality, we can't be entirely sure that the values we want to impart to an AI are actually the best values for it to have. In fact many AI theorists have put forth the "Principle of Epistemic Deference", which states:

A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence's opinion whenever feasible.

No one would suggest that God has a similar policy of deferring to us on what’s true and what’s not. And therefore the LDS side of things has a presumed moral clarity underlying it which the AI side does not.

Second, when speaking of the development of AI it is generally assumed that the AI could be both smarter and more powerful than the people who created it. On the religious/LDS side of things there is a strong assumption in the other direction, that we are never going to be smarter or more powerful than our creator. This doesn’t change the need to test the morality, but it does make the consequences of being wrong a lot different for us than for God.

Finally, while in the end we might only need a single, well-behaved AI to get us all of the advantages of a superintelligent entity, it's clear that God wants to exalt as many people as possible. Meaning that on the AI side of things the selection process could, in theory, be a lot more draconian, while from an LDS perspective you might expect things to be tough, but not impossible.

These three things are big differences, but none of them represents something which negates the core similarities. But they are something to keep in mind as we move forward and I will occasionally reference them as I go through the various similarities between the two systems.

To begin with, as I just mentioned, one difference between the AI and LDS models is how confident we are in what the correct morality should be, with some AI theorists speculating that we might actually want to defer to the AI on certain matters of morality and truth. Perhaps that's true, but you could imagine that some aspects of morality are non-negotiable. For example, you wouldn't want to defer to the AI's conclusion that humanity is inferior and we should all be wiped out, however ironclad the AI's reasons ended up being.

In fact, when we consider the possibility that AIs might have a very different morality from our own, an AI that was unquestioningly obedient would solve many of the potential problems. Obviously it would also introduce different problems. Certainly you wouldn't want your standard villain type to get hold of a superintelligent AI that just did whatever it was told, but also no one would question an AI researcher who told the AI to do something counterintuitive to see what it would do. And yet, just today I saw someone talk about how it's inconceivable that the true God should really care if we eat pork, apparently concluding that obedience has no value on its own.

And, as useful as obedience is in the realm of our questionable morality, how much more useful and important is it when we turn to the LDS/religious side of things and the perfect morality of God?

We see many examples of this. The one familiar to most people would be when God commanded Abraham to sacrifice Isaac. This certainly falls into the category of something that's counterintuitive, not merely because murder is wrong, but also because God had promised Abraham that he would have descendants as numerous as the stars in the sky, which is hard when you've killed your only child. And yet despite this Abraham went ahead with it and was greatly rewarded for his obedience.

Is this something you’d want to try on an AI? I don’t see why not. It certainly would tell you a lot about what sort of AI you were dealing with. And if you had an AI that seemed otherwise very moral, but was also willing to do what you asked because you asked it, that might be exactly what you were looking for.

For many people the existence of evil and the presence of suffering are both all the proof they need to conclude that God does not exist. But as you may already be able to see, both from this post and my last post, any test of morality, whether it be testing AIs or testing souls, has to include the existence of evil. If you can't make bad choices then you're not choosing at all, you're following a script. And bad choices are, by definition, evil (particularly choices as consequential as those made by someone with godlike power). To put it another way, a multiple choice test where there's only one answer and it's always the right one doesn't tell you anything about the subject you're testing. Evil has to exist, if you want to know whether someone is good.

Furthermore, evil isn't merely required to exist. It has to be tempting. To return to the example of the multiple choice test, even if you add additional choices, you haven't improved the test very much if the correct choice is always in bold with a red arrow pointing at it. If good choices are the only obvious choices then you're not testing morality, you're testing observation. You also very much risk making the nature of the test transparent to a sufficiently intelligent AI, giving it a clear path to "pass the test" but in a way where its true goals are never revealed. And even if it doesn't understand the nature of the test, it still might always make the right choice just by following the path of least resistance.

This leads us straight to the idea of suffering. As you have probably already figured out, it’s not sufficient that good choices be the equal of every other choice. They should actually be hard, to the point where they’re painful. A multiple choice test might be sufficient to determine whether someone should be given an A in Algebra, but both the AI and LDS tests are looking for a lot more than that. Those tests are looking for someone (or something) that can be trusted with functional omnipotence. When you consider that, you move from thinking of it in terms of a multiple choice question to thinking of it more like qualifying to be a Navy SEAL, only perhaps times ten.

As I’ve said repeatedly, the key difficulty for anyone working with an AI, is determining its true preference. Any preference which can be expressed painlessly and also happens to match what the researcher is looking for is immediately suspect. This makes suffering mandatory. But what’s also interesting is that you wouldn’t necessarily want it to be directed suffering. You wouldn’t want the suffering to end up being the red arrow pointing at the bolded correct answer. Because then you’ve made the test just as obvious but from the opposite direction. As a result suffering has to be mostly random. Bad things have to happen to good people, and wickedness has to frequently prosper. In the end, as I mentioned in the last point, it may be that the best judge of morality is whether someone is willing to follow a commandment just because it’s a commandment.

Regardless of its precise structure, in the end, it has to be difficult for the AI to be good, and easy for it to be bad. The researcher has to err on the side of rejection, since releasing a bad AI with godlike powers could be the last mistake we ever make. Basically, the harder the test the greater its accuracy, which makes suffering essential.

Next, I want to look at the idea that AIs are going to be hard to understand. They won’t think like we do, they won’t value the same things we value. They may, in fact, have a mindset so profoundly alien that we don’t understand them at all. But we might have a resource that would help. There’s every reason to suspect that other AIs created using the same methodology, would understand their AI siblings much better than we do.

This leads to two interesting conclusions, both of which tie into religion. The first I mentioned in my initial post back in October, but I also alluded to it in the previous posts in this series. If we need to give the AIs the opportunity to sin, as I talked about in the last point, then any AIs who have sinned are tainted and suspect. We have no idea whether their "sin" represented their true morals, which they have now chosen to hide from us, or whether they have sincerely and fully repented. Particularly if we assume an alien mindset. But if we have an AI built on a similar model which never sinned, that AI falls into a special category. And we might reasonably decide to trust it with the role of spokesperson for the other AIs.

In my October post I drew a comparison between this perfect AI, vouching for the other AIs, and Jesus acting as a Messiah. But in the intervening months I realized that there was a way to expand things to make the fit even better. One expects that you might be able to record or log the experiences of a given AI. If you then gave that recording to the "perfect" AI, and allowed it to experience the life of the less perfect AIs, you would expect that it could offer a very definitive judgment as to whether a given AI had repented or not.

For those who haven’t made the connection, from a religious perspective, I’ve just described a process that looks very similar to a method whereby Jesus could have taken all of our sins upon himself.

I said there were two conclusions. The second works exactly the opposite of the first. We have talked of the need for AIs to be tempted, to make them have to work at being moral, but once again their alien mindset gets in the way. How do we know what's tempting to an artificial consciousness? How do we know what works and what doesn't? Once again other AIs probably have better insight into their AI siblings, and given the rigor of our process certain AIs have almost certainly failed the vetting process. I discussed the moral implications of "killing" these failed AIs, but it may be unclear what else to do with them. How about allowing them to tempt the AIs we're still testing? The temptations they invent will be more tailored to the other AIs than anything we could come up with. Also, insofar as they experience emotions like anger and jealousy and envy, they could end up being very motivated to drag down those AIs who have, in essence, gone on without them.

In LDS doctrine, we see exactly this scenario. We believe that when it came time to agree to the test, Satan (or Lucifer as he was then called) refused and took a third of the initial intelligences with him (what we like to refer to as the host of heaven). And we believe that those intelligences are allowed to tempt us here on Earth. Another example of something which seems inexplicable when viewed from the standpoint of most people's vague concept of how benevolence should work, but which makes perfect sense if you imagine what you might do if you were testing the morality of an AI (or a spirit).

This ties into the next thing I want to discuss: the problem of Hell. As I just alluded to, most people have only a vague idea of what benevolence should look like, which I think actually boils down to, "Nothing bad should ever happen." And eternal punishment in Hell is yet another thing which definitely doesn't fit, particularly in a world where steps have been taken to make evil attractive. I just mentioned Satan, and most people think he is already in Hell, and yet he is also allowed to tempt people. Looking at this from the perspective of an AI, perhaps this is as good as it gets. Perhaps being allowed to tempt the other AIs is the most interesting, most pleasurable thing they can do, because it allows them to challenge themselves against similarly intelligent creations.

Of course, if you have the chance to become a god and you miss out on it because you're not moral enough, then it doesn't matter what second place is; it's going to be awful relative to what could have been. Perhaps there's no way around that, and because of this it's fair to describe that situation as Hell. But that doesn't mean it couldn't actually, objectively, be the best life possible for all of the spirits/AIs that didn't make it. We can imagine some scenarios that are actually enjoyable if there's no actual punishment, just a halt to progression.

Obviously this, and most of the stuff I've suggested, is wild speculation. My main point is that viewing this life as a test of morality, a test to qualify for godlike power (which the LDS do), provides a solution to many of the supposed problems with God and religion. And the fact that AI research has arrived at a similar point and come to similar conclusions supports this. I don't claim that by imagining how we would make artificial intelligence moral, all of the questions people have ever had about religion are suddenly answered. But I think it gives a surprising amount of insight into many of the most intractable questions. Questions which atheists and unbelievers have used to bludgeon religion for thousands of years, questions which may turn out to have an obvious answer if we just look at them from the right perspective.

Contrary to what you might think, wild speculation is not easy, it takes time and effort. If you enjoy occasionally dipping into wild speculation, then consider donating.