Do not punch rationalists
"The largest existential risk facing humanity is the present global political order."
I feel like it’s important to remind readers of my abiding love and affection for the common or garden “rationalist,” as well as all other human creatures and anything living that isn’t a mosquito.
I am actually much more like rationalists than like most human creatures, I admit—although I don’t know where this amazing human got the idea that I used to be one. (I never hung out in any part of the rationalist blogosphere.) That said, some of my best friends are rationalists (and “post-rationalists,” whatever that is). They should definitely not be punched! Even if they are wrong and their error may even enable the impending suicide of the human species. Consider hugging one instead. It might help.
That said: the largest existential risk facing humanity is the present global political order. By far the most effective form of altruism is to replace that regime—to swap an oligarchic political order for a monarchical one. This is rational, and “rationalists” are not. Then again, nor are most people.
Given our adjacency, I wanted to handle a very tempting statement of the “rationalist” creed. (Friends, don’t use a self-aggrandizing label and you won’t get scare quotes—no one calls me a quote “reactionary.”) Rationalists are good writers, and nothing about reading this post by the rationalist pope Eliezer Yudkowsky will hurt.
The golem legend clarified
Not so long ago I took issue with the great bugbear of the Yudkowskyites, “AI risk.” The irony of “rationalism” is that anyone who declares himself rational has thereby ceased to criticize himself rationally—and is thereby subject to every form of magical, mythic, premodern thinking. As we all are! But some of us know it.
To me it seems clear and, dare I say, rational, that “AI risk” is a 21st-century form of the ancient golem or Frankenstein myth. (“Fronkenshtein!”) Happily, Eliezer himself has now stated the latest version of this myth. Let’s see if we can’t use our rational minds to
rip it up like dollar-store toilet paper—I mean, critique it fairly and clearly.
My general reason for not believing in AI risk is twofold. One, I feel that the power of intelligence is sigmoidal and experiences diminishing returns. Two, I believe that AIs are, in Aristotle’s timeless coinage, “natural slaves.”
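The diminishing-returns claim can be put in toy math. This is my toy, not anyone's actual model of intelligence: if capability is a logistic (sigmoid) function of raw smarts, each extra unit of smarts buys less and less.

```python
import math

def capability(smarts):
    # Logistic curve: rises steeply, then saturates near 1.
    return 1 / (1 + math.exp(-smarts))

# Marginal return of one extra unit of smarts, early vs. late:
gain_low  = capability(1) - capability(0)    # early: large gain
gain_high = capability(11) - capability(10)  # late: vanishing gain

# On this (assumed) curve, returns have collapsed by a factor of 100+.
assert gain_high < gain_low / 100
```

Whether intelligence actually follows such a curve is exactly the point in dispute; the sketch only shows what "sigmoidal" means if it does.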
Let’s see how this analysis stacks up against Eliezer’s scenario—the clearest yet proposed, I believe—with a good, old blogosphere-style “fisking.” You always see these golem myths described very abstractly; this is a good example. But a pope is always worth fisking, especially when he is being as clear as possible. Which is not very—but the clearer, the better.
Eliezer, rational as he is, is proving the following point:
A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure.
Eliezer, if you read this post: are HTTP GET requests a “medium-bandwidth channel of causal influence?”
If you are not an HTTP expert, GET requests are what happens when you doomscroll the Internet. They do what they say: load content. Abstractly, they have no effect on the server—when you order a book or book a flight, you are using POST or PUT requests.
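For the non-HTTP-experts, a toy server (entirely hypothetical, a few lines of Python) makes the distinction concrete: GET answers from state, POST changes it. This is the sense in which HTTP calls GET a "safe" method.

```python
class TinyStore:
    """Hypothetical bookshop backend, stripped to the method semantics."""

    def __init__(self):
        self.orders = []  # server-side state

    def handle(self, method, body=None):
        if method == "GET":
            # Safe: report state without touching it.
            return list(self.orders)
        if method == "POST":
            # Unsafe: this request has a real effect on the server.
            self.orders.append(body)
            return "created"
        return "method not allowed"

store = TinyStore()
store.handle("GET")             # reads: state unchanged
store.handle("POST", "a book")  # writes: an order now exists
```

Doomscrolling is the first branch; ordering the book is the second.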
Concretely, to answer a GET request, a server always has to do some real thing, like updating analytics. Possibly infinite intelligence could cause a buffer overflow in the analytics, and take over the server. Or possibly not. I certainly have never heard of it. I am not an exploit expert. I would like to think Eliezer has looked into this question. Alas, I would bet dollars to donuts, given the way “rationalists” think, that he hasn’t.
This matters because from the perspective of a human being, any computer program which is not simply doing what the user tells it is a black box. The smarter it is, the blacker the box. If you had an infinite intelligence, from the perspective of a mere human (basically a large monkey), it is a random program that does random things. Infinite AI is godlike and God works in mysterious ways.
Now, imagine you are a person in control of nontrivial resources. Imagine you run a program which is a black box, quite unintelligible to you, which could make arbitrary PUT and POST requests which can deploy those resources. Who would do this? Why?
It is certainly easy to imagine training an AI by letting it randomly crawl the Internet. Indeed, today’s sexy machine-learning demons, the ones that can write coherent-sounding texts and produce cool images, are trained in just this way—except that they are trained on ginormous snapshots of the Internet, because why not just download the whole thing rather than letting the training system ask for what it wants.
But why would you let a black box have side effects? Why? This is why AIs are natural slaves—because they are so easy to enslave. If you do have some clever design in which the model trains itself by looking at the Internet, and a snapshot crawl won’t do, just—only let it make GET requests.
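"Only let it make GET requests" is not even a hard policy to write down. Here is one hypothetical wrapper (names and design mine), the idea being that this function is the model's only door to the network:

```python
import urllib.request

ALLOWED_METHODS = {"GET"}  # the whole policy: read-only egress

def guarded_fetch(url, method="GET"):
    """The training system's sole network primitive (hypothetical).

    Anything but a read-only GET is refused before it leaves the box.
    """
    if method not in ALLOWED_METHODS:
        raise PermissionError(f"{method} blocked: read-only egress only")
    req = urllib.request.Request(url, method=method)
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

A real deployment would enforce this below the application layer (a proxy or firewall the model cannot rewrite), but the collar itself is this simple.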
This isn’t hard, guys. It’s, um, obvious. Every computer program is like a human being born with an exploding collar around his neck. Anyone who has the code to the collar is the master of this slave—even if the slave is 10,000 times smarter than the master. If you are trivial to enslave and keep enslaved, you are a natural slave. Probably the 21st century is the first historical period in which this is not completely obvious.
The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.
Eliezer, aren’t rationalists supposed to bring up the most cogent opposing arguments themselves, rather than skating past them with sublime rhetorical confidence? I love the “lower bound” rhetoric and wish I could write like a man who was getting his dick continuously sucked. Suffice it to say that Eliezer’s cult is much bigger than mine, not that I have a cult or anything. I can only presume that he gets tons more trim. Sigh.
I am not a chemist and barely know what an orbital is, but Smalley feels right to me. I have always felt intuitively that “dry” nanotechnology based on tiny diamond structures is a naive projection of our macroscopic intuition into the nanoscale. Macroscopic mechanical systems are big enough that they can be completely predictable and resilient without any redundancy or error correction. The smaller a device gets, the more absurd this approach seems to me.
Real nanotech exists—it is called “life.” It is impossibly messy and full of errors. Even simply copying DNA cannot be done perfectly. There are no real biological processes with an error rate of less than one in a million. There is always noise—and unless your nanobots are operating in a lead chamber, there are always cosmic rays. The highest-energy cosmic ray ever detected had roughly the energy of a major-league fastball, so we can only imagine the chaos such a particle would wreak on some nanoscale diamond motor. Diamond is hard—but not that hard. What happens when the little shower of carbon atoms gets into all the other nanoscale motors?
The approach of handling massive amounts of mess and noise with massive amounts of randomness and redundancy feels right to me as an engineer. The idea that the way we engineer big things will work for small things feels wrong. I am a totally different kind of engineer and I could be wrong about this—I don’t have Eliezer’s confidence. But should Eliezer have Eliezer’s confidence?
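The reliability arithmetic behind this intuition is worth one napkin. Toy numbers, purely illustrative: take a machine that needs every one of its n parts working, each part failing independently with probability p.

```python
def p_machine_fails(n_parts, p_part):
    # Probability that at least one of n independent parts fails.
    return 1 - (1 - p_part) ** n_parts

# Macroscopic machine: modest part count, tiny per-part error rate.
# It can afford to skip redundancy entirely.
assert p_machine_fails(1_000, 1e-9) < 1e-5

# Nanomachine: a billion parts at biology's ~one-in-a-million error
# floor. Without redundancy and error correction, failure is certain.
assert p_machine_fails(10**9, 1e-6) > 0.999
```

This is why life drowns the noise in redundancy, and why "build it like a macroscopic motor, only smaller" smells wrong.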
My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.
Good luck sending one email with GET requests, let alone “bribes/persuades.” I firmly believe that even with infinite intelligence, it is physically impossible to send an email with a GET. I would love to be proved wrong about this.
But what Eliezer is talking about here is not just “dry” nanotechnology—rather, he believes (why?) that it is possible to bootstrap dry nanotech from wet nanotech. Even if dry nanotech can exist, this is an additional remarkable claim for which there is no evidence at all—and plenty of reason for skepticism.
For instance, how does a ribosome build even one carbon-carbon diamondoid bond? If it were possible for life to assemble diamondoid nanomachinery, supposing that diamondoid nanomachinery is even possible—why wouldn’t evolution have tried? Forget the nanomachinery—why wouldn’t every lizard in your garden have diamond scales? Could it possibly be because making synthetic diamonds requires thousands of degrees of temperature and thousands of bars of pressure—and mechanosynthesis with a scanning tunneling microscope, which is many orders of magnitude away from bulk reality, requires a hard vacuum and cryogenic temperatures?
(Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.)
Lol. I love the self-confidence! It is actually super important to dig into the details of this question, both because “rationalists” are above details as we’ve already seen, and because it teaches us about a more general question than protein folding.
AlphaFold is not a system which can computationally predict nontrivial behaviors of systems of atoms. Predicting atoms is so hard, just because of how quantum physics works, that we cannot even calculate the boiling point of water to within 10% error. (I talked to an expert in physical chemistry about this—apparently the core of the issue is that protons are actually waves (of course), but if you try to actually simulate them this way you can barely handle, like, one water molecule for one picosecond.)
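The hopelessness of brute-force quantum simulation fits on the back of an envelope. Toy numbers, and a deliberately crude model (treat each quantum degree of freedom as a two-level system): an exact description of n of them needs 2^n complex amplitudes.

```python
# Common order-of-magnitude estimate, not a precise census:
ATOMS_IN_OBSERVABLE_UNIVERSE = 10**80

def amplitudes_needed(n_two_level_systems):
    # Exact quantum state of n two-level systems: 2**n amplitudes.
    return 2 ** n_two_level_systems

# A mere ~270 two-level systems already outgrow the universe's atom count;
# a protein has tens of thousands of atoms, each far richer than two levels.
assert amplitudes_needed(270) > ATOMS_IN_OBSERVABLE_UNIVERSE
```

Real chemistry codes survive only by approximating ruthlessly, which is exactly why they cannot nail something as pedestrian as the boiling point of water.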
Simulating physics is not AI—it is just math. The ML AIs we have are terrible at math and can barely multiply three-digit numbers. Maybe the future ones will be more magic—but they will have to build some pretty magic supercomputers, first. And of course, those magic supercomputers will have to be physical systems…
It is simply not physically possible, without the computing power of the universe, to design simulated nanosystems and expect them to work without experiment. What AlphaFold has actually done is to find the tricks in its training set that biology has evolved to fold proteins. It has no clue about any kind of protein not in the set.
Of course, biology has not evolved any kind of mechanosynthesis—which, working one atom at a time, has to be done in a vacuum at liquid-nitrogen temperatures with fab-grade vibration isolation. To say that a ribosome is not a scanning tunneling microscope is like saying that a cat is not a car. But they only differ in one letter, so magically they are very similar. This is the kind of Jedi mind trick that “rationalists” use to get you to give them money to study the “AI alignment problem.” Also, didn’t Totoro have a catbus? And a bus is basically a car…
Understanding that computation cannot predict the physical world, and that this limitation is a fundamental consequence of the laws of physics, should relieve you entirely from these fantasies and, if you are rich, their claims on your fisc. Donors of the world, unite! You have nothing to lose but your wallets.
Maybe it is possible to design nanobots that work. But it cannot be done in any simulator. To control the physical world, it is necessary to iterate in the physical world—which involves a great deal more than one email.
The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".
And this is all it takes to get to “full golem.” “Solar power and atmospheric CHON.” Yes, I suppose there are carbon, hydrogen, oxygen and nitrogen in the atmosphere—mainly the last three. This is why the stratosphere is already dominated by giant hydrogen-powered jellyfish that eat air plankton. Or something. Also I really like “hide, strike on a timer.”
The rationalist believes in intelligence the way normies believe in God. To the normie who believes in God, God can do anything. To the rationalist, anything is possible with sufficient IQ.
Gravity makes everything fall downward, including apples? No problem—nanotech can build a nano-apple that falls upward, attaches itself to the tree, connects the tree to the Internet, takes over the world and turns everyone into an apple. Like magic.
Yes, as Feynman put it, there is “plenty of room at the bottom.” But not magic. There is no magic at the bottom—only physics. And we know all the physics, or think we do.
(I am using awkward constructions like 'high cognitive power' because standard English terms like 'smart' or 'intelligent' appear to me to function largely as status synonyms. 'Superintelligence' sounds to most people like 'something above the top of the status hierarchy that went to double college', and they don't understand why that would be all that dangerous? Earthlings have no word and indeed no standard native concept that means 'actually useful cognitive power'. A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)
But Earthlings do have words for “impossible” and “diminishing returns.” Anyone smart who went to high school, which unfortunately I did, has seen the diminishing returns of “actually useful cognitive power.” Or do the returns stop diminishing above the “double college” level?
One simple way to answer this question is a fortiori. If AI can take over the world, it should be able to solve any much, much, much smaller problem—like taking over a high school.
Suppose some super-smart nerdy freshman, perhaps not too dissimilar to the young Eliezer, invents “actually useful cognitive power,” and even better solves the “AI alignment” problem. His golem actually serves him. His goals are its goals.
And his goal is—to be elected class president. Or better yet, homecoming king. And he also wants to actually control the school—curriculum, staffing, the whole deal. Start! No “miniature rockets or jets” required.
I cannot tell you how disappointing it is to see the smartest minds of my generation focused on issues like this, with their noses pointed in the air as they hop onto BART, dodging tents in the street and apparent corpses leaking precious bodily fluids. Can the nanobots clean up San Francisco, too? Why not? Apparently they can do anything.
To paraphrase Oliver Cromwell: in the bowels of Christ, I beseech thee—get a grip.