Month: April 2018

Steven Pinker and the Fragility of Enlightenment

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

Imagine someone you know with a solid 9-5 job: they make great money working at a company that’s been around for a long time. In this situation, have they solved the problem of money or not? They probably feel like they have, and at first glance we might agree, but to be sure you’d need to know a lot more. For example, here are four different people who all fit that description, but who are, behind the scenes, very different.

Example 1: This person saves a portion of their income, lives modestly, has the recommended amount of retirement savings if not more, and is constantly seeking out new skills and making sure that their resume looks good. They don’t expect to be laid off, but if it should happen they’re prepared. All that said, their current job looks good, and they’ve gotten a raise every year and a promotion every five years or so.

Example 2: This person basically spends what they earn. They have some debt, but not a crazy amount, contribute the minimum to their 401k, and could probably find another job if they suddenly had to, but haven’t given much thought to it, because their current job looks good, and they’ve gotten a raise every year and a promotion every five years or so.

Example 3: This person spends more than they make. Their debt is on a steady upward trend, but they’re still several years away from being in trouble. They have no retirement savings, and they’ve pulled every ounce of equity they can out of their house. They’re not worried because their current job looks good, and they’ve gotten a raise every year and a promotion every five years or so.

Example 4: This person has a job that’s going through a temporary boom. Say, someone working in the mortgage industry just before the crash, or one of Jordan Belfort’s traders at the height of his time as the Wolf of Wall Street. This is someone who spends every penny and then some, figuring that not only will this boom last forever, but that the curve will just keep going up, and why not? Their income has gone up by 50% if not more in the last couple of years. People have told them that this is unsustainable, but they’ve just ignored them. “Who needs that kind of negative energy in your life?”

By certain metrics every one of these individuals is doing great, the lines all go up, and in some cases (like example 4) nearly straight up. On top of this, there’s every reason to think the lines are just going to keep going up. But under the hood these four examples are, of course, very different. I’ve just finished Steven Pinker’s book, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress, and the way Pinker describes the world is very similar to the way you might describe someone with a very solid 9-5 job. The person with the job has solved the problem of money, and that problem is likely to continue to be solved (after all, they’ve gotten a raise every year and a promotion every five years or so). In a similar fashion, using reason and science, humanity has solved the problems of deprivation and violence. Everything has been trending up for centuries and there’s no reason to suspect it won’t continue that way.

The first half or so of Pinker’s book is all about describing how great this 9-5 job is. How consistent the raises have been, how much less time people have to work than in the bad old days when the company was a startup. All the perks the employees get, like catered meals and massages and great healthcare. And you know what? I agree with all of this. His case is very persuasive, and loaded with graphs and figures. Humanity has had a great time of it over the last several centuries by nearly every important measure. The question is what’s been going on behind the scenes. Is humanity the first example, living modestly with a nice amount of savings, well prepared for sudden black swans and other reversals? Or are we example two, in a situation where we could probably buckle down a little bit more, but aren’t doing anything grossly irresponsible?

The argument I’ll be making in this post is that humanity is not the cautious person in the first example, or even the second person spending what we make and no more, with a manageable amount of debt. My argument will be that humanity is best compared to the person in the third example and there’s even reason to suspect that the fourth example might be the best description of all.

Before I get any further I should say that I liked the book and I would definitely recommend reading it, particularly if your pessimism about the world is having a negative impact on your life. Pinker is a great writer and the case he makes for progress should cheer you up. If nothing else the chart on page 64 of lives saved by various modern scientists and their inventions is humbling. Karl Landsteiner is apparently at the top, with a billion lives saved through his discovery of blood groups. Second are Abel Wolman and Linn Enslow, whose water chlorination saved 177 million lives. And as I have said, as far as his initial point goes I have no argument with Pinker: humanity really does have a great 9-5 job. The time of being a poor college student is long gone. Things have gotten a lot better and are currently pretty fantastic. However…

In contrast to Pinker, I do not think we have been very prudent with this 9-5 job. I also think Pinker underestimates the chances of getting “laid off”. As I said when I talked about his book Better Angels of Our Nature, Pinker seems to definitely be of the opinion that we are saved, and in “Enlightenment Now” he doubles down on that theme, a theme which directly contradicts the theme of this blog, namely that technology and progress have not saved us. If Pinker’s book is so well researched and so full of graphs showing that progress has done all these wonderful things for humanity and will likely continue to do so, then who am I to argue that he’s wrong?

Probably no one of any consequence. And in fact as I have said before I hope he’s right, and that I’m wrong. But I’m worried that he’s overlooked some things, things which are more consequential than he gives them credit for.

To begin with, everything he says hinges on the idea that the historical trends he has pointed out will continue into the future. But why? Certainly we’ve seen reversals of progress in the past; why is this time different? I can think of two reasons.

1- There is something mystical going on. History has an arc. Progress is a cosmic imperative.

I spoke about this idea in my Religion of Progress post, and to his credit, Pinker distances himself from it, at one point writing the following:

Social spending demonstrates an uncanny aspect of progress that we’ll encounter again in subsequent chapters. Though I am skittish about any notion of historical inevitability, cosmic forces, or mystical arcs of justice, some kinds of social change really do seem to be carried along by an inexorable tectonic force. As they proceed, certain factions oppose them hammer and tongs, but resistance turns out to be futile. Social spending is an example. The United States is famously resistant to anything smacking of redistribution. Yet it allocates 19 percent of its GDP to social services, and despite the best efforts of conservatives and libertarians the spending has continued to grow.

In the process of distancing himself from the first possibility he introduces the second possibility:

2- It’s not cosmic, but humanity has reached some sort of minimal threshold for collective wisdom which is sufficient to lock in progress even over strenuous opposition.

If this is the case, my question is: have we locked in progress, or have we just locked in a leftward drift? In my post The Cultural War and The Overton Window I argued it was the latter, and pointed out that progress, particularly as Pinker defines it, is not the same thing as becoming increasingly radical leftists. In fact if you were to look at Pinker’s body of work you’d find that he frequently sets himself up in opposition to the radical Left. (Which is one of the reasons why I like him.) Despite this it’s not entirely clear where Pinker falls on this issue. Certainly he seems to be saying in many places that progress is “locked in”. But if that’s so, why did he feel the need to write a book subtitled “The Case For Reason, Science, Humanism and Progress”? Why does he need to make a case for things that are going to happen regardless of whether we oppose them or not, like the aforementioned social spending in the US?

If we take Pinker at his word, then it can’t be the first possibility. It would therefore have to be something along the lines of this second possibility, but it appears that Pinker wants to have his cake and eat it too. He wants to make the case that things have never been better and are only going to continue to improve. That progress is something of an unstoppable force. That humanity’s 9-5 job is entirely secure. While at the same time warning that if we’re too unreasonable, or too negative, or even too religious, this unstoppable force will be derailed. The general sense one gets is that Pinker can be worried about threats to the modern world, but if anyone else worries about it, they’re engaged in progressophobia. In other words, progress can be derailed, but only by ignoring the things Pinker thinks are important.

The next obvious question is: what does Pinker think is important? What is he worried about? Well, as you can imagine, like many people he’s worried about Trump, and he has a list of 11 areas where he thinks that Trump and people like him threaten the onward march of progress. To be fair, he does offer the following caveat to this worry:

Not even a congenital optimist can see a pony in this Christmas stocking. But will Donald Trump (and authoritarian populism more generally) really undo a quarter of a millennium of progress? There are reasons not to take the poison just yet. If a movement has proceeded for decades or centuries, there are probably systematic forces behind it, and many stakeholders with an interest in its not being precipitously reversed.

By design of the Founders, the American presidency is not a rotating monarchy. The president presides over a distributed network of power (denigrated by populists as the “deep state”) that outlasts individual leaders and carries out the business of government under real-world constraints which can’t easily be erased by populist applause lines or the whims of the man at the top.

After laying out in exacting detail the 11 reasons for concern, his reversal comes across as hollow. I think he is correct about how it worked in the past, but that was the past. Currently there is significant reason to be concerned that the power of the “deep state” to constrain the president is significantly weakened. And I believe it’s fair to say that a lot of that erosion came in the name of “progress” under people like FDR and Obama. Trump is dangerous not because he’s Trump (though that doesn’t help); he’s dangerous because technology and progress have created fragility, and in this case fragility consists of the progressive accumulation of power in the hands of a single individual. As an example of that power, immediately before the section I just quoted, Pinker points out that:

Worst of all, the chain of command gives an American President enormous discretion over the use of nuclear weapons in a crisis…

I would argue that this is an example of a useful criticism of progress, of the kind Pinker either wants to minimize or not acknowledge at all. In our 9-5 example, you can certainly see where increasing centralization has led to fragility which didn’t exist before. In other words, there’s a good chance that when you lose your job many people will have also lost their jobs at the same time. In similar fashion, as I pointed out in a previous post, a slowdown in China can cause pizza parlors to close in Chile. I saw very little if anything in the book which spoke to this sort of fragility. All of which is to say that if a single person can bring it all down, then eventually they will; Trump is not unique. I know Pinker tries to argue that Trump probably can’t bring it all crashing down, but “probably” isn’t good enough.

Another thing Pinker doesn’t want to talk about is government debt. (And of course there’s an increasing problem with debt at the state and local level as well.) The words “debt” and “deficit” do not appear in the index. I know there are people who disagree with me on this point (and I’m sure at least one of them will leave a comment to that effect). But you cannot deny that the world is much more interconnected than it once was, and that one mistake (most recently the mortgage crisis) can reverberate across the globe. This is different. Historically, China could be going through a golden age while at the same time Europe languished. And no matter how bad or good of a leader Genghis Khan was, nothing he did could have any impact on what happened in America.

As you might imagine from the book’s subtitle, Pinker’s chief worries are that we will abandon the reason and science that got us here, that we will turn away from the wonder of The Enlightenment. First, I would argue that things beyond reason and science got us here; second, I think Pinker places too much weight on The Enlightenment. As one of the reviews of the book points out:

“The only major claim not supported by a graph (or indeed much evidence of any kind) is the assertion that all this progress has something to do with the Enlightenment.” Pinker seems wedded to the Enlightenment like some kind of secular creation myth, and this results in two particular problems with the book.

The first problem, which I’ll mention just briefly, is that Pinker seems to have engaged in an extended No True Scotsman fallacy. Or, as the same review puts it:

…[H]e seems incapable of acknowledging the Enlightenment’s complicity in any social ills. Science, he acknowledges, “has often been pressed into the support of deplorable political movements” and “is commonly blamed for intellectual movements that has a pseudoscientific patina”. But beyond that: it’s not me guv. As William Davies wrote in the Guardian, “With some deft intellectual moves, he manages to position “enlightenment” and “science” on the right side of every argument or conflict, while every horror of the past 200 years is put down to ignorance, irrationality or “counter–enlightenment” trends.”

As I pointed out in a previous post, there was a time when nothing was more progressive than communism. Pinker largely breezes by this particular ugly historical example, essentially arguing that no TRUE example of “progress” involves killing 100 million people. But in doing so he relies too much on a definition of progress which only looks backwards. Of course it’s not progress to kill a bunch of people. But everything that led up to those deaths seemed very progressive at the time.

Before leaving Pinker’s defense of reason and science, I have one final point to make. If reason and science are in danger, who has the power to cause them harm? When was the last time you heard of a liberal professor being fired? Or a liberal columnist? Give me a recent example of harm caused by conservative opposition to science. I imagine you’ll find it difficult because as far as I can tell there aren’t any. All of which is to say reality is considerably more nuanced than Pinker’s story of rationality triumphing over faith and superstition at the dawn of The Enlightenment, its triumph ensured as long as we can keep the religious yokels at bay.

This brings us to the second problem the review mentions, his opposition to religion, which also opens things up to the other side of our discussion. What kind of things does Pinker not worry about?

Pinker definitely does not worry about a decline in religion; in fact he celebrates it. As I already mentioned, he takes several potshots at it in the course of the book. I intend to do a post soon on the value of religion (even if you remove the spiritual side of things) but for now it’s sufficient to point out that this is another example of Pinker’s lack of nuance. I think one review described it best:

“Opposing reason is, by definition, unreasonable.” Steven Pinker is fond of definitions. Early on in this monumental apologia for a currently fashionable version of Enlightenment thinking, he writes: “To take something on faith means to believe it without good reason, so by definition a faith in the existence of supernatural entities clashes with reason.” Well, it’s good to have that settled once and for all. There is no need to trouble yourself with the arguments of historians, anthropologists and evolutionary biologists, who treat religion as a highly complex phenomenon, serving a variety of human needs. All you need do is consult a dictionary, and you will find that religion is – by definition – irrational.

This is extremely reductionist, and if that’s where we’re at, then for my own part I might as well say that this argument is as silly as claiming that nothing which came before a given 9-5 job had anything to do with getting that job, nor anything to do with keeping it. As far as I’ve stretched this analogy, this comparison contains at least as much useful knowledge as the idea that religion is by definition irrational, and probably more.

I find that I’m about out of space and I’ve only scratched the surface of things, so I want to return to the metaphor I began with: the 9-5 job. I admit that humanity has a great 9-5 job. But if I’ve learned anything, I know that employment can turn on a dime. And while I don’t expect western civilization to be quite as fragile as a 9-5 job, I do know that when you have a good job it doesn’t mean you should spend like crazy (see the national debt, and add to that energy usage), it doesn’t mean that you should expand your lifestyle till you reach the limit of what your credit will support (virtually everything else humanity is doing), nor does it mean that there’s no chance of suddenly losing that job.

Pinker seems to have a hard time distinguishing between preparation and pessimism. And maybe you can’t prepare without a certain amount of pessimism, especially about rare and potentially devastating events, or what are often called existential threats.

This is the final thing I want to discuss, and it’s another subject, along with religion, that Pinker’s not worried about, devoting an entire chapter to discussing how such fears are overblown. And I understand that these are complex topics, but it sounds a lot like someone saying, “I could never get laid off, I’m too valuable!” or “Enron will never go out of business, it’s a $70 billion company!” And despite these being complicated topics his treatment of them is pretty blasé. In his discussion of AI risk he appears to be more interested in attacking straw men than in legitimately engaging with the arguments, going so far as to present the paperclip maximizer argument as a genuine worry rather than as a useful shorthand. He covers a few more potential threats with similar terseness, and concludes that the only things we have to worry about are nuclear war and climate change, and not very much even about those two risks.

Maybe he’s right; maybe all of the worries about existential risk are just needless pessimism that detracts from a glorious future where we use science and reason and technology to leave this planet behind and spread to the stars in a perfect progressive future. But if that’s the case, then as with so many things I discuss in this space, we return to Enrico Fermi’s question, “Where is everybody?” (Pinker mentions Fermi’s Paradox only once, parenthetically and dismissively.) If unlimited progress is as unstoppable as Pinker claims, if all it takes is sufficient veneration for reason and science to conquer the world and from there the universe, why hasn’t anyone done it already?

We’ve had a good run, and hopefully it will continue. There’s a lot of evidence that it won’t, and ignoring that evidence may be the least reasonable and scientific thing of all.

Somewhere way, way down the list of things that are both reasonable and scientific is donating to this blog. But you have to ask yourself, do you want civilization to end? If not, consider donating.

Aryans, and Errors, and Indo-Europeans, Oh My!


One of the presentations from the MTA conference, on which I have spilled so much virtual ink, was a presentation on anthropology. During that presentation Franz Boas was mentioned, in approving tones, as someone who had done much to banish the racist notions of previous anthropologists. Now I don’t know much about Boas, but I had encountered his name recently, as someone peripherally connected to a new book getting coverage in most of the big newspapers: David Reich’s Who We Are and How We Got Here. And no, I haven’t read it yet (my copy arrived on Monday), but the book and its claims speak directly to a subject I covered in August of last year, after reading another book, In Search of the Indo-Europeans, by J.P. Mallory.

At the time I wrote this previous post, I had the strong sense that I was walking into a hugely controversial topic, one where I was definitely out of my depth, a topic which had been buried for a while. But it wouldn’t have been the first time (or the last) so I didn’t let it deter me. My sense was that the subject was analogous to an old family scandal that no one talks about any longer, but which simmers behind many intra-family interactions, and in this specific case had something to do with Nazis and Aryans and the master race. Reich’s book, or at least the coverage and reviews surrounding it, has brought this “scandal” into clearer focus. At least that’s the impression I get from the coverage. I want to re-emphasize up front that I’m definitely on the periphery of things. But, since Reich’s book appears to support several of the conclusions I drew in that last post, I thought it would be worth revisiting.

In that last post, I alluded to the fact that there is a debate between two anthropological schools of thought. (Again, as far as I can tell from the limited things I have read.) Both schools of thought agree that there was a nation or a race or what have you (maybe a loose confederation) of Indo-Europeans, or proto-Indo-Europeans (PIEs). The question that people have been grappling with ever since Europeans realized that nearly all languages spoken on their continent stemmed from a single source is: how did one language become so widespread? The answer to this question is what divides the two schools.

One school maintains that it was spread through violence and conquest. That sometime in the unrecorded past the PIEs spread out from Central Asia conquering as they went, and over the next few thousand years they and their descendants ended up dominating nearly all of Europe, Asia Minor and much of the Indian subcontinent. How great of an empire they actually possessed or how cohesive it was is a matter of pure speculation, but whatever the case, outside of the Far East and the Arabic-speaking countries, today nearly every nation has an Indo-European language as one of its official languages.

The second theory is that there was no great conquering event, no horde of Indo-Europeans who surged out in a series of conquering waves thousands of years ago. What you had, rather, was a very successful example of trade and cultural exchange; the Indo-European language spread as far as it did largely by peaceful means. Some people have called this theory Pots not People, indicating that when artifacts of a similar design were found over vast distances (in this case the corded ware pottery) it indicated trade in those pots, not that the people using these pots were all related to one another. That when you saw things being transmitted they were generally transmitted by culture, not genetics.

It is my understanding, particularly in light of the abuse the first theory was subjected to by the Nazis, that the second theory has been dominant for many decades. I alluded to all of this in my last post and said that it seemed obvious to me that the spread of the PIEs was violent. (I used the term “conquer everything in sight.”) The point of all of this is that Professor Reich’s book backs me up, as he basically claims that whatever indigenous populations existed before the PIEs were almost entirely wiped out by them. Reich is a Harvard geneticist and he comes to this conclusion through conducting stunningly expansive DNA testing on ancient human remains.

Today, the peoples of West Eurasia—the vast region spanning Europe, the Near East, and much of central Asia—are genetically highly similar. The physical similarity of West Eurasian populations was recognized in the eighteenth century by scholars who classified the people of West Eurasia as “Caucasoids” to differentiate them from East Asian “Mongoloids,” sub-Saharan African “Negroids,” and “Australoids” of Australia and New Guinea…. [P]opulations within West Eurasia are typically around seven times more similar to one another than West Eurasians are to East Asians. When frequencies of mutations are plotted on a map, West Eurasia appears homogeneous, from the Atlantic façade of Europe to the steppes of central Asia. There is a sharp gradient of change in central Asia before another region of homogeneity is reached in East Asia….

One doesn’t even have to read between the lines to see a road that leads us right back into the Nazi/Aryan camp. And in fact it’s my understanding that some German scientists withdrew from the study as the implications became clearer. Meaning we’re left with a giant dump of new science with potentially troubling implications. If we assume that everything that Reich says is correct (and I see no reason for doubting it) what should we do with it? Obviously there are people who will use it in a way most find distasteful, and by all accounts Reich has bent over backwards in an attempt to avoid having his book made into a weapon to be used in the current culture war, for example avoiding the word “race” and using “ancestry” instead. And who could blame him? Despite these efforts he’s definitely taken some heat, and I’ll let you be the judge of whether he deserved it.

It is not my intention with this post to wade into that particular conflict; rather, I want to look at how archaeologists and anthropologists like Franz Boas, who championed culture over conquest and genetic replacement, ended up being wrong about this issue. How articles on the spread of the Indo-European language could describe it as spreading peacefully, when Reich’s research makes it clear that it was anything but. (Reich’s research actually reveals strong invasive male replacement, implying all the indigenous males were killed and all the indigenous females were raped.)

Surely the error comes in part from the tension between our desire to find the truth and our desire to believe what makes us feel good, to believe what matches our prejudices. After World War II we were strongly prejudiced against anything the Nazis believed, and (selectively) finding evidence which disproved their favorite theory made us feel good. All of this is understandable and perhaps even expected, and mostly we just hope that, eventually, the scientific method will work and that over a long enough time horizon truth will prevail over error. That said, this subject, along with things like doubts about adult human neuroplasticity, and of course the replication crisis in general, shows that science has by no means banished error even within its own walls.

(To say nothing of banishing it in the general population. We’re a long way away from eliminating all flat-earthers, and it’s my understanding that the numbers have been increasing.)

There are several ways things could play out in the long run:

1- On balance we could end up with more truth than falsehood as errors are perpetually discovered and corrected. The trend line might be a little jagged but in the long term it always points up. Progress and enlightenment will continue until we reach our glorious transhuman future.

Many people who’ve thought about the issue deeply fall into this camp, including Steven Pinker, whose book Enlightenment Now I’m hoping to cover in this space next week. That yes, there are frequent setbacks, but if you look at things since the Enlightenment, everything has been getting much better and will continue to do so.

If this is the case there’s no need to worry. But in order for it to be the case, errors do have to be corrected, and therefore anyone who points out errors, or who even thinks they’ve spotted an error, should be given space to speak. In other words, we have to love truth so much that we’re willing to put up with a lot to make sure that it gets out there. That’s definitely not the way things seem to be going. In fact I’ve read reviews of Reich’s book where people make the claim that what he’s saying is true, but that the potential divisiveness outweighs the scientific value.

2- We are at the end of the scientific era, and as science becomes less able to deliver material progress and change we’ll become less convinced of the need for this sort of error checking and the tide will turn so to speak.

This possibility relies on the idea that the scientific revolution is part of a cycle, a cycle which could be reversed. Obviously there are cyclical forces in history, and the question is whether an increased emphasis on science and rationality is one of them. If it is, then, while it seems that we have entered a period of permanent, ever expanding upward progress, we are just at the top of the wave. And yes it’s been the biggest wave in history, but it’s finally about to crest.

On his blog Ecosophia, John Michael Greer has posited something along these lines. In his estimation history cycles between eras of abstraction and reflection, and he is of the opinion that we’re nearing the end of an era of abstraction. In his words:

An age of abstraction dawns when a handful of well-chosen abstract generalizations offer a sudden, startlingly clear glimpse at the shape of some corner of the cosmos. The first great triumphs of modern science played that role in kickstarting the era of abstraction now winding to its end, just as geometry did in ancient Greek times and scholastic logic did in the Middle Ages.

Then at some point the abstractions start to lose their value as they become increasingly unable to describe the messiness of reality, at which point an era of reflection dawns, where:

…the achievements of the departed era of abstraction get sorted through and assessed, the good bits kept, the useless bits chucked, and the habit of letting some pompous windbag with credentials tell the rest of the world what the absolute truth is this week gets a well-deserved rest.

I’m aware of the fact that viewing history as an oscillation between high-level abstraction and inward-looking reflection is itself something of a high-level abstraction, but I think it describes the world in a way that makes a lot of sense as I survey what’s going on, and we’ll come back to it in a second.

3- In general truth will accumulate at a faster rate than error, but one error will be big enough that it won’t matter.

The classic example of this is nukes, the error being that we thought it was safe to keep them around. Now it may be that, given the fears and prejudices and game theory involved, there was nothing we could do to avoid or correct this error, but however true that is, it won’t matter if civilization ends because of it. Another example is global warming. As I said in a previous post, I personally do not think that global warming has the power, entirely on its own, to reverse progress, to say nothing of killing everyone, but some people do. If we decide to get more speculative, then we have all sorts of potential errors which people claim might be the error that overwhelms all of our accumulated truth.

I’ve covered many of these potential errors in past posts. Everything from job automation to AI is a potential candidate, as is of course the one I’ve already mentioned, a reduction in the free exchange of ideas, though this is something of a meta-error. With all such errors (and I’m sure most people have their favorite) the key thing to remember is not the details or likelihood of any one error, but the fact that it’s only in the last hundred years or so that option 3 has become possible. Somehow in our efforts to combat error we’ve reached a point where it’s conceivable that a single error could undo everything.

(One variant of this idea may be that we’re coordinated enough to create the error, but not coordinated enough to fix it.)

At this point it may seem that we’re pretty far afield from where we started, but even errors made by anthropologists about the distant past can be usefully examined in this context. First because even if they seem benign, it’s a useful snapshot of the current health of our error correction system, and by extension a snapshot of whether progress is continuing. Second, I’m not sure they’re as benign as you think. I know they seem inconsequential, but this is precisely what makes them so pernicious.

As to the first, my sense is that the snapshot it provides is of a system not as healthy as I would like. Yes, we found an error and now it’s corrected. The system worked as it should have. Or at least that’s one way of looking at it. But how sure are we that this is a correction which will stick? Given the controversy the book is already engendering, are we sure that anthropology departments will immediately switch to teaching based on Reich’s data? I really have my doubts about that. In fact if I had to make a prediction I’d say that Reich’s book will largely be ignored in anthropology departments (particularly given that Reich himself is a geneticist) and that whatever errors Reich’s data could be used to correct will remain uncorrected.

Moving outward from this specific topic, what are we to make of the replication crisis? Is it an example of a healthy truth-gathering system which quickly discovers error and exposes it? At first glance it would appear so, but once again I think if we dig deeper, it’s more emblematic of the problem than the cure. In my opinion the replication crisis isn’t happening because the world is full of bad scientists who are intentionally cooking the books (though there have been a few of those); it’s happening because we’re reaching the limits of what experimentation can tell us, particularly the kinds of experiments that can actually be pulled off by a college social science department.

Perhaps this is just a temporary mismatch between what we want to know and the resources available for finding those things out. And maybe if Mark Zuckerberg wanted to, Facebook could discover once and for all whether ego depletion is a thing, whether power poses work, or whether hearing old words makes you walk slower. (All things which have failed to replicate.) But my sense is that even if you throw more resources at the problem we’re still reaching the end of what we can know in several important areas, particularly those involving the study of humans, since they often present a moving target. Though to be fair, even in chemistry and physics there is significant failure to reproduce, with over 80% of chemists and nearly 70% of physicists reporting that they had failed to reproduce someone else’s work.

If we have reached a state where additional truths get harder and harder to come by, then that’s bad news for the entire concept of progress, because there’s no evidence of a similar slowdown in the rate of error creation. We have been in a situation where one bad error could wipe out progress since at least the 50s, and we may be entering an era where truth can’t even stay ahead of falsehood in the aggregate. Now obviously this is a strong claim, but also one that’s not entirely out of the realm of possibility either. Still, I wouldn’t blame you if you thought I’d gone too far. Thus let’s set that aside for now and return to the two schools of thought concerning the Indo-Europeans I began the post with.

As a reminder, there’s expansion through conquest and genetic replacement, and there’s expansion through culture. As you might have guessed, the conquest school of thought came first. Meaning that we had it right, and then as people spent more time thinking about it they managed to dismiss the correct answer and talk themselves into the wrong answer. This is not how science is supposed to work, at least according to some of its more vociferous defenders. According to them humanity has been saddled with this large body of superstitions and myths, and science comes along and gradually wipes them all out, until pure reason has triumphed everywhere. Science is not supposed to go backwards. I could probably spend an entire post talking about this and maybe I will someday, especially since I don’t think that what we’re currently discussing is the only example of it happening. There appears to be a broader trend.

Returning to the reflection/abstraction cycle of Greer, in that model the replication crisis came about because we tried to overapply a useful abstraction. It may be that humanity is like a new college student who, being shown the power of a concept they’ve never heard of, immediately goes forward, flush with this new knowledge, and tries to apply it to everything. I understand the situation is complicated, but this also sounds like what happened in anthropology. New concepts like the power of culture, the Blank Slate ideology, and a desire to distance themselves from the evil that had been done in connection with the older theory, led scientists to reject the previous, correct theory in favor of one that matched their current favorite abstraction. You can almost hear a college student talking about how now that they understand intersectionality, “Everything makes so much sense!” In my opinion something very similar happened in science, and specifically in anthropology: once they understood the power of culture and the evils of racism, the non-violent spread of the PIEs became obvious!

Where does all this leave us? If you’re anything like me you may have some concerns that pleasant and comforting abstractions have overwhelmed actual science, and that even when scientists say, no, we had it right the first time, the reason we all speak an Indo-European language is because they murdered all the people who spoke other languages, those scientists will be ignored because something…something… Nazis! But even if this precise error is rectified, it does not mean that, in the larger scheme of things, truth will automatically triumph over error. And it’s becoming increasingly evident that the battle between the two is closer than I would like.

Am I on the side of truth? I hope so. If you agree consider donating. If you don’t agree consider commenting, you’ll have plenty of company.

Mormon Transhumanists, so Close, but yet so Far

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

I’m always looking for a way to take a break, and by that I mean engage in some activity which recharges my batteries a little bit, but which simultaneously doesn’t derail me or take up a huge amount of time. As it turns out this combination is difficult to achieve, and by any objective standard it’s one I’ve largely failed at. My breaks either end up taking too long, or completely derailing me from being productive, or draining my energy rather than replenishing it, and most of the time, all three.

One of the few effective ways I’ve discovered to take a break is reading a webcomic. Doing so takes almost no time (unless Randall goes crazy on XKCD), recharges my batteries (particularly on the all important scale of energy per unit of time) and mostly doesn’t derail me. (Though I’m still kind of working on that part.)

All of this is a prelude to talking about a specific webcomic, Existential Comics, and I bring it up to illustrate my experience at the Conference of the Mormon Transhumanist Association (MTA) this last Saturday. The choice of this comic works on several levels. First, the topic of existential despair came up several times. Second, Existential Comics is all about philosophy, a subject which dominated the conference. Finally, and most importantly, the most recent comic (at least at the time of the conference) seemed to really nail my relationship with the MTA.

The comic features Buddha and David Hume discussing philosophy in a bar. They quickly discover that they both agree that there is no “self”. No transcendental being separate from what’s experienced. After realizing this Buddha asks Hume what should be done as a consequence of this realization. Hume is sure, after being so in sync thus far, that their answers will be the same, and proposes that they respond simultaneously. And, on the count of three, Buddha responds, “Turn away from sensual pleasure and earthly passions into contemplation.” At the same time as Hume says, “Pursue only your sensual pleasure and passions over your reason!”

As it turns out, from the same premise Buddha and Hume reached exactly the opposite conclusion. This is kind of how I feel about the MTA. We both agree that technology has created an entirely new landscape, particularly with respect to religion, but they think it’s revealed how humans can assist in far greater measure with the project of salvation than was previously thought possible, while I think technology has gotten to the point where we can finally understand how truly impossible it is to assist with salvation. More specifically, our progress has revealed to an even greater degree that the limiting factor has always been our morality, not our abilities.

I should mention, at this point, before going any further, that I did enjoy the conference, perhaps because, as opposed to last year, there was more focus on the premise, where we are both in agreement, than on their conclusion, which is where we differ. But it’s also possible I enjoyed it more because I was presenting, which not only led to more interaction, but a greater feeling of control with the whole thing. I should also mention that no one brought up any of my previous criticisms. (I’m still not sure if they ever made the connection.) And everyone was kind and welcoming. And better than all that, my presentation was very well received, which covers any number of other issues. (Here it is if you want to view it.) All of this is not to say that I’m going to join the MTA, or that I don’t think they’re making some fairly serious mistakes in their interpretation of LDS Theology, but I came away from this conference more aware of where we agreed, and more convinced that they are aware of some of the issues with their conclusion. I’m sure there are also issues with my conclusion. Finally, if any MTA members read this and feel that I’ve misrepresented them, they should definitely point that out, since the remainder of this post will be commentary on the conference. I’ve made no secret of the fact that one of the big reasons I attend the MTA Conference is to get material for my blog, and I should use that material while it’s fresh.

(For those uninterested in Mormonism in general and the MTA in particular, I will return with something of more general interest next week.)

The first two presentations of the day actually made me think that I should have waited until after the conference before posting last week’s entry, since both downplayed the importance of immortality, which I had made a major point of. That said, immortality is in their affirmation, and later presentations (including a keynote from the CEO of a longevity company) convinced me I wasn’t that far off. But, in any event, both of the initial presentations contributed to the positive feelings I mentioned above.

Turning our focus to just the first presentation, the speaker made the point that there is a danger in a purely technological approach to the world. That technology has a tendency to reduce everything down to a tool, and that it kills society, and by extension humanity, through this dissection. He mentioned the idea of turning humans into instruments and then referenced Robin Hanson’s keynote from last year’s conference, where Hanson discussed his book The Age of Em. I suppose this is another illustration of humility, that you would use a presentation from last year as your prime example of what you don’t want to happen. And this is another thing I agree on: the world described in The Age of Em sounds kind of awful. But it does represent one of the potential logical end points of this instrumentalization he was talking about. Having said that, what’s the solution?

The speaker’s solution was to complement technology with religion or at least a religious approach. Which is essentially another way of stating the philosophy and ideology the MTA and I both share. In other words, I couldn’t agree more, and, once again, this illustrates why I decided to engage with the MTA and even present.

Moving on, the first presentation was good, but it was really the second presentation that made me question whether I was misrepresenting the MTA in the post I had published the very morning of the conference. The second presentation started out with the inevitability of death, and more than that, the presenter made precisely the point I made: that ethics and immortality are incompatible, that you can’t test something if the test never ends. Then, to take it to the next level, she illustrated this point with a clip from the Good Place. For those of you familiar with the show, it was a clip from Season 2 where Chidi induces existential despair in Michael (Ted Danson) by finally bringing home the possibility of death. Which is what starts Michael on the path to understanding morality and ethics.

From this example she made the point that most of the time we are in one of these two states. Either we’re like Michael at the end of the clip, completely overwhelmed with how short and pointless life appears to be, or we’re like Michael at the beginning, where we care very little about morality because we think we’re going to live forever. Into this latter category she places some members of the Church. Contending that sometimes we don’t care enough about mortal suffering because we think that in the end, it will turn out okay. That when someone is suffering, say from a bad home environment or just from living in a less-developed country, the average LDS response might be that their suffering will all eventually be to their benefit. Her main example was a gentleman who recently caught on fire while barbecuing, and died after several days of agony, but, she contended, the average Mormon might respond, “it’s fine he’s going to be resurrected.”

Her contention is that these things are not fine, and that transhumanism is a virtuous middle ground between the two extremes. That it is better at solving existential despair than straight humanism and traditional technology and that it’s better at solving the actual suffering encountered in this life than straight religion.

I think this is an interesting argument in favor of transhumanism, but I’m not sure how far I buy it. If someone told me that certain Mormons are too blase about suffering, I wouldn’t argue with them. But I can’t imagine anyone, after being told the story of the man who burned to death, shrugging and saying “it’s fine he’s going to be resurrected.” I can imagine them drawing comfort in the midst of their despair and sadness from their hope that he will be resurrected, but I can’t imagine them declaring that sadness is inappropriate.

Now one could certainly imagine that Mormons might be slightly less inclined to spend resources to minimize harm in this life. Particularly as you get to the further ends of the spectrum, that is large amounts of resources for increasingly smaller harms. For example if we ever conquer aging, such that people are immortal, but can still get into accidents, then you could imagine those immortals becoming increasingly concerned with even the rarest kinds of accidents, and you could further imagine religious people, who believe in an afterlife, not, for example, wanting to spend billions of dollars and millions of man-hours to take the chances of an airplane crashing from 1 in 20 million to 1 in 21 million. Particularly if there are other places to spend that money and time.

All of which is to say that, as with so many things, there are trade-offs, which the presenter does a good job of pointing out, but as I have argued from the beginning, they frequently point out the pros of this tradeoff without giving much thought to the cons. If I may be so bold, I think one of the MTA’s arguments is that they can spend all this time focusing on the technology side of things (the T in the MTA) and be just as good a Mormon as everyone else, if not better (the M in the MTA). And my argument has always been that at a minimum there may be sacrifices they’re unaware of, and that, very likely, this focus creates a warping of the central Mormon Theology into something different. Which takes us to the third presentation.

The title of the third presentation was “Being Christ in Name and Power” and it was largely about how the presenter viewed the role of Jesus. And it was a great example of the ideological warping that, in my opinion, comes from too much focus on the T part of the MTA. The first part of the presentation was dedicated to showing that the title of “christ” was used a lot more broadly than just to refer to Jesus. That since “christ” basically means “one who is anointed” that historically Jewish kings and Jewish priests and even Cyrus the Great all got the title of “christ”. From this he basically comes to the conclusion that Jesus was just one of many “christs”, and that we should aspire to be one of the “christs” in the same way that the ancient kings and priests and even Cyrus the Great were.

Leaving aside, for the moment, whether this is an accurate reflection of the gospel, let’s examine why the presenter might have come to this conclusion. Transhumanism involves taking a lot of the things that most Mormons (and for that matter most Christians) expect to be done by God and Jesus and instead doing them ourselves. If these things are reserved to God and Jesus, then taking them on is at best misguided and at worst heretical. On the other hand, if we’re all supposed to be “christs”, if we’re all anointed to take on the work of salvation, up to and including figuring out how to resurrect people (and possibly even beyond that), then taking on all of the transhumanist projects is enlightened and orthodox, more orthodox even than the main body of the church.

When speaking of the presenter’s ideas, it’s possible that his very broad reading of who could be a “christ” came first, and that his interest in transhumanism came second, but I suspect it’s the reverse. That he started with the transhumanism and then broadened his interpretation of LDS scripture looking for ways to justify the things he was already inclined to do. I suspect it’s this way because it’s a very rare human who doesn’t work this way. For my own part, I come to this discussion with a more pessimistic view and the way I read the scriptures is going to be similarly biased by that. And you, dear reader, are going to have to account for my declared bias and the presenter’s suspected bias, and decide for yourself which interpretation makes sense.

With that out of the way we’re finally prepared to examine the merits of his argument. Which, as I said, begins with a broadening of the definition of “christ” and from there proceeds to the conclusion that we should be a “christ”. He suggests that we should do this in two different ways: that we should be Christ in name and that we should be Christ in power. He then gives several examples of being Christ in name, which I mostly have no objection to. And then he gives several examples of being Christ in power. He says we need to love, console, forgive, feed the hungry, clothe the naked, heal the sick, and raise the dead. This is an excellent list, and, really, I think the only aspect I disagree with is how much of a role technology should play. But let’s start by looking at where we agree.

I don’t think anyone argues with the idea that we should become more Christ-like, that we should try to love as he loved, console as he would console, and forgive exactly as he would forgive. Further, everyone seems to agree that part of taking on his name consists of joining his Church, which carries his name, and which is where we are reminded of his commandments. Successfully doing all of these things comes from choosing a Christian morality and following a definite set of ethics. In short from choosing to be righteous.

In other words, being the kind of person who can be as forgiving as Jesus (which is super tough by the way) is, as far as I can tell, 100% about our morality and 0% about the technology we possess. Even healing the sick, if you believe at all in the power of prayer and priesthood blessings, comes about through the power of faith, with a definite dependence on our righteousness. It’s really only the last item on the list, raising the dead, where suddenly, at least according to the transhumanists, faith and morality aren’t enough, where you really want to bring technology into play. (I would contend that you can raise the dead with enough faith, merely that it requires a level of faith and righteousness which is exceedingly rare.)

To be fair, the presenter mentioned feeding the hungry and clothing the naked as examples of technology, but we don’t produce food because we intend to feed the hungry, we produce food to feed ourselves and make money (cue Adam Smith). It’s morality, not technology, that leads us to share it with the hungry. Meaning our list is all things we can accomplish almost entirely by just being more righteous, until we arrive at the last item. Where we transition from acts where morality is the critical component, to something where you don’t need much morality, but you need an awful lot of liquid nitrogen.

All of this stems from the idea that we are all christs of a sort, and to a certain extent, I, and most people in the church, I imagine, agree with this. But when that extends to ditching morality for (or even hitching morality to) technology, that’s where the disagreement starts. And my discomfort only increases when you combine all this with downplaying the role of Jesus. Which takes things even farther away from, what I believe to be, foundational LDS doctrine. As an example of what I mean consider this quote from Joseph Smith:

The fundamental principles of our religion are the testimony of the Apostles and Prophets, concerning Jesus Christ, that He died, was buried, and rose again the third day, and ascended into heaven; and all other things which pertain to our religion are only appendages to it.

And there are hundreds of similar quotes from other LDS Church leaders. Including Jesus himself saying:

I am the way, the truth, and the life: no man cometh unto the Father, but by me.

I’m sure the MTA has a way of fitting all of this into their favored interpretation, but I think the plain reading (and coincidentally, my presentation on AI) indicates that Jesus is critical to the entire thing, that his status as Christ is a difference not merely of degree, but also of kind from what we are being asked to do. Particularly when it comes to the atonement. The importance of which the presenter went out of his way to minimize as well. Mentioning that the word was used only once in the New Testament to refer to Jesus’ atonement and that every other time (69 in total) it was talking about “other christs performing other atonements.”

There are people who believe that Jesus has done everything and that there is nothing left for us to do. People who believe that salvation is entirely through the grace of God, and that our works have nothing to do with it. I don’t claim to be any expert on this branch of Christianity (is it fair to classify everyone in this category as a Calvinist?), but insofar as my assessment is correct (and I know there are nuances that mostly get missed) I certainly have my objections to this ideology, and if that’s the ideology the MTA was objecting to I’d be totally on board. But instead the MTA appears to want to take the opposite side of that dichotomy, and rather than the grace of God doing everything they want the technology of humans to do everything.

This is the spot where in previous discussions, someone from the MTA will show up and quote this part of their affirmation:

We believe that scientific knowledge and technological power are among the means ordained of God to enable such exaltation… (emphasis mine)

And, yes, to be fair, they probably don’t expect humans to do everything, but they certainly appear to want humanity to do more than any other Christian religion. So maybe they’re not all the way to the edge, but they have claimed the territory which lies closest to it.

My original intention was to devote only part of this post to the conference, and here I am almost out of space and I’ve only covered the first three presentations. Accordingly I’ll skip to the end and briefly cover the last presentation because in some respects it illustrates the danger of the entire project and indeed of most such projects.

This final presentation was a criticism of how God behaved historically, specifically in the Old Testament and the Book of Mormon, but it also brought in the Hindu gods and the Hellenic gods. From this, the presenter concluded that he wanted to “design” a “good” God. He contended that this design was particularly important because we definitely end up with gods one way or the other, and recently we have replaced the cruel and warlike gods of history with gods of distraction and consumerism.

The presentation called on people to worship better gods, gods who aren’t racist, gods who don’t “privilege male over female”, gods who have “compassion over bloodlust”. And I understand the appeal of this, and I have all the sympathy in the world for the presenter’s personal faith crisis which led him to this point, but either God exists or he doesn’t, and if he does we need to figure out what he wants from us, and do it. And any attempt (of which the final presentation is just the most blatant) to impose our own values on God is just delusional hubris.

I’m sure there’s some of that delusional hubris in me as well (and of course if there is no God the entire conference was delusional), and it’s certainly possible that I’m wrong about everything (and not merely in this post.) But I would end by urging everyone to focus more on righteousness than technology, and more on understanding what God wants of us than in making demands of him.

I never make demands, I prefer to cajole, wheedle and beg. Speaking of which, have you considered donating?

Further Lessons in Comparing AI Risk and the Plan of Salvation

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

On the same day as this post goes live, I’ll be at the annual conference of the Mormon Transhumanist Association (MTA). You may remember my review of last year’s MTA Conference. This year I’m actually one of the presenters. I suspect that they may not have read last year’s review (or any of my other critical articles) or they may have just not made the connection. But also, to their credit, they’re very accepting of all manner of views even critical ones, so perhaps they know exactly who I am. I don’t know, I never got around to asking.

The presentation I’m giving is on the connection between AI Risk and the LDS Plan of Salvation, subjects I covered extensively in several past posts. I don’t think the presentation adds much to what I already said in those previous posts, so there wouldn’t be much point in including it here. (If you’re really interested email me and I’ll send you the Google slide deck.) However, my presentation does directly tie into some of the reservations I have about the MTA, and so, given that perhaps a few of them will be interested enough in my presentation to come here and check things out, I thought this would be a good opportunity to extend what I said in the presentation and look at what sort of conclusions might follow if we assume that life is best viewed as similar to the process for reducing AI Risk.

As I mentioned, I covered the initial subject (the one I presented on today) at some length already, but for those who need a quick reminder or who are just joining us, here’s what you need to know:

1- We may be on the verge of creating an artificial superintelligence.

2- By virtue of its extreme intelligence, this AI would have god-like power.

3- Accordingly we have to ensure that the superintelligence will be moral, i.e. that it will not destroy us.

Mormons believe that this life is just such a test of “intelligences”, a test of their morality in preparation for eventually receiving god-like powers. Though I think I’m the first to explicitly point out the similarities between AI Risk and the LDS Plan of Salvation. Having made that connection, my argument is that many things previously considered strong arguments against, or problems with, religion (e.g. suffering, evil, Hell, etc.) end up being essential components on the path to trusting something with god-like power. Considering these problems in this new light was the primary subject of the presentation I gave today. The point of this post is to go farther, and consider what further conclusions we might be able to draw from this comparison, particularly as it relates to the project of Mormon Transhumanism.

Of course everything I say going forward is going to be premised on accepting the LDS Plan of Salvation (more accurately, my specific interpretation of it) and the connections I’m drawing between it and AI Risk. Which I assume many are not inclined to do, but if you could set your reservations aside for the moment I think there’s some interesting intellectual territory to cover.

All of my thinking proceeds from the idea that one of the methods you’re going to try as an Artificial Intelligence Researcher (AIR) is isolating your AI. Limiting the damage a functionally amoral superintelligence can cause by cutting it off from its ability to cause that harm, at least in the real world.

(Now of course many people have argued that it may be difficult to keep an AI in a box so to speak, but if the AIR is God and we’re the intelligences, presumably that objection goes away.)

It’s easy to get fixated on this isolation, but the isolation is a means to an end not an end in itself. It’s not necessary for its own sake, it’s necessary because we assume that the AI already has god-like intelligence, and we’re trying to keep it from having a god-like impact until it has god-like morals. Accordingly we have three pieces to the puzzle:

1- Intelligence

2- Morals

3- Impact

What happens when we consider those three attributes with respect to humans? It’s immediately obvious from the evidence that we’re way out ahead on 3: humanity has already made significant strides towards having the ability to create a god-like impact, without much evidence that we have made similar strides with attributes 1 and 2. The greatest example of that is nuclear weapons. Trump and Putin could, separately or together, have a god-like impact on the world. And the morality of it would be the opposite of god-like, and the intelligence of the action would not be far behind.

Now I would assume that God isn’t necessarily worried about any of the things we worry about when it comes to superintelligence. But if there were going to be a concern (perhaps even just amongst ourselves) it would be the same as the concern of the AIRs: that we end up causing god-like impacts before we have god-like morals. This means the three attributes are not all equally important. I believe any objective survey of LDS scripture, prophetic counsel, or general conference talks would conclude that the overwhelming focus of the church is on item 2, morals. If you dig a little deeper you can also find writings about the importance of intelligence, but I think you’ll find very little related to having a god-like impact.

I suspect at this point I need to spell out what I mean by that phrase. I’ve already given the example of nuclear war; to that I could add a whole host of environmental effects, on the negative side of things. On the positive side you have the green revolution, the internet, skyscrapers, rising standards of living, etc. Looking towards the future we can add immortality, brain-uploading, space colonization, and potentially AI, though that could go either way.

All of these are large-scale impacts, and that’s the kind of thing I’m talking about: things historians could be discussing in hundreds of years. LDS/Mormon doctrine does not offer much encouragement in favor of making these sorts of impacts. In fact, if anything, it comes across as much more personal, dispensing advice about what we should do if someone sues us for our cloak, or the benefits of saving even one soul, or what we should do if we come across someone who has been left half dead by robbers. All exhortations which apply to individual interactions. There’s essentially nothing about changing the world on a large scale through technology, and arguably what advice is given is strongly against it. As you can probably guess, I’m talking about the Tower of Babel. I did a whole post on the idea that the Tower of Babel applies to the MTA, so I won’t rehash it here. The point of all of this is that I get the definite sense that the MTA has prioritized the impact piece of the equation for godhood to the detriment of the morality piece, which, for an AIR monitoring the progress of a given intelligence, is precisely the sort of thing you would want to guard against.

As an example of what I’m talking about, consider the issue of immortality, something that is high on the Transhumanist list as well as the Mormon Transhumanist list. Now, to be clear, all Mormons believe in eventual immortality; it’s just that most of them believe you have to die first and then come back. The MTA hopes to eliminate the “dying first” part. This is a laudable goal, and one that would have an enormous impact, but that’s precisely the point I was getting at above: allowing god-like impacts before you have god-like morality is the thing we’re trying to guard against in this model. Also, “death” appears to have a very clear role in this scenario, insofar as tests have to end at some point. If you’re an AIR this is important, if only for entirely mundane reasons like scheduling, having limited resources, and most of all having a clear decision point. But I assume you would also be worried that the longer a “bad” AI has to explore its isolation, the more likely it is to be able to escape. Finally, and perhaps most important for our purposes, there’s significant reason to believe that morality becomes less meaningful if you allow an infinite time for it to play out.

If this were just me speculating on the basis of the analogy, you might think that such concerns are pointless, or that they don’t apply when we replace our AIR with God. But it turns out that something very similar is described in the Book of Mormon, in Alma chapter 42. The entire chapter speaks to this point, and it’s probably worth reading in its entirety, but here is the part which speaks most directly to the subject of immortality.

…lest he should put forth his hand, and take also of the tree of life, and eat and live forever, the Lord God placed cherubim and the flaming sword, that he should not partake of the fruit—

And thus we see, that there was a time granted unto man to repent, yea, a probationary time, a time to repent and serve God.

For behold, if Adam had put forth his hand immediately, and partaken of the tree of life, he would have lived forever, according to the word of God, having no space for repentance; yea, and also the word of God would have been void, and the great plan of salvation would have been frustrated.

But behold, it was appointed unto man to die…

At a minimum, one gets the definite sense that death is important. But maybe it’s still not clear why. The key is that phrase “space for repentance”. There needs to be a defined time during which morality is established. Later in the chapter the term “preparatory state” is used a couple of times, as is the term “probationary state”. Both phrases point to a test of a specific duration, a test that will definitively determine, one way or the other, whether an intelligence can be trusted with god-like power. And while it’s not clear that this is necessarily the case with God, with respect to artificial intelligence, once we give it god-like power we can’t take it back. The genie won’t go back in the bottle.

To state it more succinctly, this life is not a home for intelligences, it’s a test of intelligences, and tests have to end.

It is entirely possible that I’m making too much of the issue of immortality. Particularly since true immortality is probably still a long way off, and I wouldn’t want to stand in the way of medical advances which could improve the quality of life. (Though I think there’s a good argument to be made that many recent advances have extended life without improving it.)  Also I think that if death really is a crucial part of God’s Plan, that immortality won’t happen regardless of how many cautions I offer or how much effort the transhumanists put forth. (Keeping in mind the assumptions I mentioned above.)

Of more immediate concern might be the differences in opinion between the MTA and the LDS Leadership, which I’ve covered at some length in those previous posts I mentioned at the beginning. To highlight just one issue I spoke about recently: the clear instruction from the church is that its leaders should counsel against elective transsexual surgery, while as far as I can tell (see my review of the post-genderism presentation from last year) the MTA views “Gender Confirmation Surgery” as one of the ways in which they can achieve the “physical exaltation of individuals and their anatomies” (that’s from their affirmation). Now I understand where they’re coming from. It certainly does seem like the “right” thing to do is to allow people the freedom to choose their gender, and to allow gay people the freedom to get married in the temple (another thing the LDS Leadership forbids). But let me turn to another story from the scriptures. This time we’ll go to the Bible.

In the Old Testament there’s a classic story concerning Samuel and King Saul. King Saul is commanded to:

…go and smite Amalek, and utterly destroy all that they have, and spare them not; but slay both man and woman, infant and suckling, ox and sheep, camel and ass.

But rather than destroying everything Saul:

spared…the best of the sheep, and of the oxen, and of the fatlings, and the lambs, and all that was good, and would not utterly destroy them: but every thing that was vile and refuse, that they destroyed utterly.

He does this because he figures that God will forgive him for disobeying, once he sacrifices all of the fatlings and lambs, etc. But in fact this act is where God decides that making Saul the King was a mistake. And when Samuel finally shows up he tells the King:

And Samuel said, Hath the Lord as great delight in burnt offerings and sacrifices, as in obeying the voice of the Lord? Behold, to obey is better than sacrifice, and to hearken than the fat of rams.

I feel like this Biblical verse might be profitably placed in a very visible location in all AIR offices. Because when it comes down to it, no matter how good the AI is (or thinks it is), or how clever it ends up being, in the end the most important thing might be that if you tell the AI to absolutely never do X, you want it to absolutely never do X.

You could certainly imagine an AI pulling a “King Saul”. Perhaps if we told it to solve global warming it might decide to trigger massive volcanic eruptions. Or if we told it to solve the population problem, we could end up with a situation that Malthus would have approved of, but which the rest of the world finds abhorrent, even if, in the long run, the AI assures us that the math works out. And it’s likely that our demands on these issues would seem irrational to the AI, even evil. But for good or for ill, humanity definitely has some values which should supersede behaviors the AI might otherwise be naturally inclined to adopt, or which, through its own reasoning, it might conclude are the moral choice. If we can accept that this is a possibility with potential superintelligences, how much more could it be the case when we consider the commandments of God, who is a lot more intelligent and moral than we are?

If we accept the parallel, then we should accept exactly this possibility: that something similar might be happening with God. That there may be things we are being commanded not to do, but which seem irrational or even evil. Possibly this is because we are working from a very limited perspective. But it’s also possible that we have been given certain commandments which are irrational, or perhaps just silly, and it’s not our morality or intelligence being tested, but our obedience. As I just pointed out, a certain level of blind obedience is probably an attribute we want our superintelligence to have. The same situation may exist with respect to God. And it is clear that obedience, above and beyond everything I’ve said here, is an important topic in religion. The LDS Topical Guide lists 120 scriptures under that heading, and cross-references an additional 25 closely related topics, each of which probably has a similar number of scriptures attached.

Here at last we return to the idea I started this section with. I know there are many things which seem like good ideas. They are rational, and compassionate, and exactly the sort of thing it seems like we should be doing. I mentioned as examples supporting people in “gender confirmation surgery” and pressing for gay marriages to be solemnized in the temple. But if we look at AI Risk in connection with the story of King Saul, we can also ask: maybe this is a test of our obedience? I am not wise enough to say whether it is or not, and everyone has to chart their own path, listen to their own conscience, and do the best they can with what they’ve got. But I will say that I don’t think it’s unreasonable to draw the conclusion from this comparison that tests of obedience are something we should expect, and that they may not always make as much sense as we would like.

At this point, it’s 6 am the morning of the conference where I’ll be presenting, which basically means that I’m out of time. There were some other ideas I wanted to cover, but I suppose they’ll have to wait for another time.

I’d like to end by relating an analogy I’ve used before. One which has been particularly clarifying for me when thinking about the issues of transhumanism and where our efforts should be spent.

Imagine that you’re 15 and the time has come to start preparing to get your driver’s license. Now, you just happen to have access to an auto shop. Most people (now and throughout history) have not had access to such a shop, but with that access you think you might be able to build your own car. Maybe you can and maybe you can’t; building a car from scratch is probably a lot harder than you think. But if, by some miracle, you are able to build a car, does that also give you the qualifications to drive it? Does building a car give you knowledge of the rules (morality) necessary to safely drive it? No. Studying for and passing the driver’s license test is what (hopefully) gives you that. And while I don’t think it’s bad to study auto mechanics at the same time as studying for your driver’s license test, the one may distract from the other, particularly if you’re trying to build an entire car, which is a very time-consuming process.

God has an amazing car waiting for us, much better than what we could build ourselves, and I think he’s less interested in having us prove we can build our own car than in having us show that we’re responsible enough to drive safely.

I really was honored to be invited to present at the MTA conference, and I hope I have not generated any hard feelings with what I’ve written, either now or in the past. Of course, one way to show there are no hard feelings is to donate.

That may have crossed a line. I’m guessing that, with that naked cash grab, if there weren’t any hard feelings before, there are now.