Do not punch rationalists
"The largest existential risk facing humanity is the present global political order."
I feel like it’s important to remind readers of my abiding love and affection for the common or garden “rationalist,” as well as all other human creatures and anything living that isn’t a mosquito.
I am actually much more like rationalists than like most human creatures, I admit—although I don’t know where this amazing human got the idea that I used to be one. (I never hung out in any part of the rationalist blogosphere.) That said, some of my best friends are rationalists (and “post-rationalists,” whatever that is). They should definitely not be punched! Even if they are wrong and their error may even enable the impending suicide of the human species. Consider hugging one instead. It might help.
That said: the largest existential risk facing humanity is the present global political order. By far the most effective form of altruism is to replace that regime: to swap an oligarchic political order for a monarchical one. This is rational, and “rationalists” are not. Then again, nor are most people.
Given our adjacency, I wanted to handle a very tempting statement of the “rationalist” creed. (Friends, don’t use a self-aggrandizing label and you won’t get scare quotes—no one calls me a quote-unquote “reactionary.”) Rationalists are good writers, and it will not hurt you to read this post by the rationalist pope, Eliezer Yudkowsky.
The golem legend clarified
Not so long ago I took issue with the great bugbear of the Yudkowskyites, “AI risk.” The irony of “rationalism” is that anyone who declares himself rational has thereby ceased to criticize himself rationally—and is thereby subject to every form of magical, mythic, premodern thinking. As we all are! But some of us know it.
To me it seems clear and, dare I say, rational, that “AI risk” is a 21st-century form of the ancient golem or Frankenstein myth. (“Fronkenshtein!”) Happily, Eliezer himself has now stated the latest version of this myth. Let’s see if we can’t use our rational minds to rip it up like dollar-store toilet paper. Or rather: to critique it fairly and clearly.
My general reason for not believing in AI risk is twofold. One, I feel that the power of intelligence is sigmoidal and experiences diminishing returns. Two, I believe that AIs are, in Aristotle’s timeless coinage, “natural slaves.”
Let’s see how this analysis stacks up against Eliezer’s scenario—the clearest yet proposed, I believe—with a good old blogosphere-style “fisking.” You always see these golem myths described very abstractly; this is a good example. But a pope is always worth fisking, especially when he is being as clear as possible. Which is not very—but the clearer, the better.
Eliezer, rational as he is, is proving the following point:
A cognitive system with sufficiently high cognitive powers, given any medium-bandwidth channel of causal influence, will not find it difficult to bootstrap to overpowering capabilities independent of human infrastructure.
Eliezer, if you read this post: are HTTP GET requests a “medium-bandwidth channel of causal influence?”
If you are not an HTTP expert, GET requests are what happens when you doomscroll the Internet. They do what they say: load content. Abstractly, they have no effect on the server—when you order a book or book a flight, you are using POST or PUT requests.
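For the non-HTTP people, here is roughly what the difference looks like from the client side (a minimal Python sketch using the requests library; the URLs and fields are made up):

```python
import requests  # third-party HTTP client: pip install requests

# Doomscrolling: a GET just asks the server to send content back.
page = requests.get("https://example.com/reviews/some-book")
print(page.status_code, len(page.text))

# Ordering the book: a POST sends data that is supposed to change
# something on the server's side (an order, a booking, a tweet).
order = requests.post(
    "https://example.com/orders",
    json={"isbn": "978-0-00-000000-0", "quantity": 1},  # illustrative fields
)
print(order.status_code)
```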
Concretely, to answer a GET request, a server always has to do some real thing, like updating analytics. Possibly infinite intelligence could cause a buffer overflow in the analytics, and take over the server. Or possibly not. I certainly have never heard of it. I am not an exploit expert. I would like to think Eliezer has looked into this question. Alas, I would bet dollars to donuts, given the way “rationalists” think, that he hasn’t.
This matters because from the perspective of a human being, any computer program which is not simply doing what the user tells it is a black box. The smarter it is, the blacker the box. If you had an infinite intelligence, from the perspective of a mere human (basically a large monkey), it is a random program that does random things. Infinite AI is godlike and God works in mysterious ways.
Now, imagine you are a person in control of nontrivial resources. Imagine you run a program which is a black box, quite unintelligible to you, which could make arbitrary PUT and POST requests which can deploy those resources. Who would do this? Why?
It is certainly easy to imagine training an AI by letting it randomly crawl the Internet. Indeed, today’s sexy machine-learning demons, the ones that can write coherent-sounding texts and produce cool images, are trained in just this way—except that they are trained on ginormous snapshots of the Internet, because why not just download the whole thing rather than letting the training system ask for what it wants.
But why would you let a black box have side effects? Why? This is why AIs are natural slaves—because they are so easy to enslave. If you do have some clever design in which the model trains itself by looking at the Internet, and a snapshot crawl won’t do, just—only let it make GET requests.
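What “only let it make GET requests” means in practice is not exotic; it is a few lines of glue. A minimal sketch, assuming a Python training harness and the requests library (the function name and the byte cap are mine, not anybody’s real system):

```python
import requests  # pip install requests

MAX_BYTES = 1_000_000  # illustrative cap on how much the model may read

def fetch(url: str, timeout: float = 10.0) -> str:
    """The only network primitive the model ever sees: GET, or nothing."""
    resp = requests.get(url, timeout=timeout, stream=True)
    resp.raise_for_status()
    body = resp.raw.read(MAX_BYTES, decode_content=True)
    return body.decode(resp.encoding or "utf-8", errors="replace")

# The training loop hands the model fetch() and nothing else: no session
# object, no generic request(), no POST, no PUT.
```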
This isn’t hard, guys. It’s, um, obvious. Every computer program is like a human being born with an exploding collar around his neck. Anyone who has the code to the collar is the master of this slave—even if the slave is 10,000 times smarter than the master. If you are trivial to enslave and keep enslaved, you are a natural slave. Probably the 21st century is the first historical period in which this is not completely obvious.
The concrete example I usually use here is nanotech, because there's been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.
[Citation needed]. Actually, my skepticism of diamondoid nanotechnology is, uh, shared by people who know a lot more about atoms than either I or Eliezer Yudkowsky do.
Eliezer, aren’t rationalists supposed to bring up the most cogent opposing arguments themselves, rather than skating past them with sublime rhetorical confidence? I love the “lower bound” rhetoric and wish I could write like a man who was getting his dick continuously sucked. Suffice it to say that Eliezer’s cult is much bigger than mine, not that I have a cult or anything. I can only presume that he gets tons more trim. Sigh.
I am not a chemist and barely know what an orbital is, but Smalley feels right to me. I have always felt intuitively that “dry” nanotechnology based on tiny diamond structures is a naive projection of our macroscopic intuition into the nanoscale. Macroscopic mechanical systems are big enough that they can be completely predictable and resilient without any redundancy or error correction. The smaller a device gets, the more absurd this approach seems to me.
Real nanotech exists—it is called “life.” It is impossibly messy and full of errors. Even simply copying DNA cannot be done perfectly. There are no real biological processes with an error rate of less than one in a million. There is always noise—and unless your nanobots are operating in a lead chamber, there are always cosmic rays. The highest-energy cosmic ray ever recorded had roughly the energy of a major-league fastball, so we can only imagine the chaos it would wreak on some nanoscale diamond motor. Diamond is hard—but not that hard. What happens when the little shower of carbon atoms gets into all the other nanoscale motors?
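(If you want to check the fastball comparison, here is the back-of-envelope arithmetic in Python. The cosmic-ray figure is the famous “Oh-My-God particle” at roughly 3×10^20 eV; the baseball numbers are illustrative round numbers and work out closer to a lazy pitch than a real fastball, but the order of magnitude is the point.)

```python
# Back-of-envelope: record cosmic-ray energy versus a thrown baseball.
EV_TO_JOULES = 1.602e-19            # one electron-volt, in joules

cosmic_ray_ev = 3e20                # "Oh-My-God particle", roughly
cosmic_ray_j = cosmic_ray_ev * EV_TO_JOULES        # about 48 J

ball_mass_kg = 0.145                # regulation baseball
ball_speed_ms = 26                  # ~58 mph, an illustrative lob
ball_j = 0.5 * ball_mass_kg * ball_speed_ms ** 2   # about 49 J

print(f"cosmic ray: {cosmic_ray_j:.0f} J, baseball: {ball_j:.0f} J")
```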
The approach of handling massive amounts of mess and noise with massive amounts of randomness and redundancy feels right to me as an engineer. The idea that the way we engineer big things will work for small things feels wrong. I am a totally different kind of engineer and I could be wrong about this—I don’t have Eliezer’s confidence. But should Eliezer have Eliezer’s confidence?
My lower-bound model of "how a sufficiently powerful intelligence would kill everyone, if it didn't want to not do that" is that it gets access to the Internet, emails some DNA sequences to any of the many many online firms that will take a DNA sequence in the email and ship you back proteins, and bribes/persuades some human who has no idea they're dealing with an AGI to mix proteins in a beaker, which then form a first-stage nanofactory which can build the actual nanomachinery.
Good luck sending one email with GET requests, let alone “bribes/persuades.” I firmly believe that even with infinite intelligence, it is physically impossible to send an email with a GET. I would love to be proved wrong about this.
But what Eliezer is talking about here is not just “dry” nanotechnology—rather, he believes (why?) that it is possible to bootstrap dry nanotech from wet nanotech. Even if dry nanotech can exist, this is an additional remarkable claim for which there is no evidence at all—and plenty of reason for skepticism.
For instance, how does a ribosome build even one carbon-carbon diamondoid bond? If it were possible for life to assemble diamondoid nanomachinery, supposing that diamondoid nanomachinery is even possible—why wouldn’t evolution have tried? Forget the nanomachinery—why wouldn’t every lizard in your garden have diamond scales? Could it possibly be because making synthetic diamonds requires thousands of degrees of temperature and thousands of bars of pressure—and mechanosynthesis with a scanning tunneling microscope, which is many orders of magnitude away from bulk reality, requires a hard vacuum and cryogenic temperatures?
(Back when I was first deploying this visualization, the wise-sounding critics said "Ah, but how do you know even a superintelligence could solve the protein folding problem, if it didn't already have planet-sized supercomputers?" but one hears less of this after the advent of AlphaFold 2, for some odd reason.)
Lol. I love the self-confidence! It is actually super important to dig into the details of this question, both because “rationalists” are above details as we’ve already seen, and because it teaches us about a more general question than protein folding.
AlphaFold is not a system which can computationally predict nontrivial behaviors of systems of atoms. Predicting atoms is so hard, just because of how quantum physics works, that we cannot even calculate the boiling point of water to within 10% error. (I talked to an expert in physical chemistry about this—apparently the core of the issue is that protons are actually waves (of course), but if you try to actually simulate them this way you can barely handle, like, one water molecule for one picosecond.)
Simulating physics is not AI—it is just math. The ML AIs we have are terrible at math and can barely multiply three-digit numbers. Maybe the future ones will be more magic—but they will have to build some pretty magic supercomputers, first. And of course, those magic supercomputers will have to be physical systems…
It is simply not physically possible, without the computing power of the universe, to design simulated nanosystems and expect them to work without experiment. What AlphaFold has actually done is to find, in its training set, the tricks that biology has evolved to fold proteins. It has no clue about any kind of protein not in the set.
Of course, biology has not evolved any kind of mechanosynthesis—which, working one atom at a time, has to be done in a vacuum at liquid-nitrogen temperatures with fab-grade vibration isolation. To say that a ribosome is not a scanning tunneling microscope is like saying that a cat is not a car. But they only differ in one letter, so magically they are very similar. This is the kind of Jedi mind trick that “rationalists” use to get you to give them money to study the “AI alignment problem.” Also, didn’t Totoro have a catbus? And a bus is basically a car…
Understanding that computation cannot predict the physical world, and that this limitation is a fundamental consequence of the laws of physics, should relieve you entirely from these fantasies and, if you are rich, their claims on your fisc. Donors of the world, unite! You have nothing to lose but your wallets.
Maybe it is possible to design nanobots that work. But it cannot be done in any simulator. To control the physical world, it is necessary to iterate in the physical world—which involves a great deal more than one email.
The nanomachinery builds diamondoid bacteria, that replicate with solar power and atmospheric CHON, maybe aggregate into some miniature rockets or jets so they can ride the jetstream to spread across the Earth's atmosphere, get into human bloodstreams and hide, strike on a timer. Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".
And this is all it takes to get to “full golem.” “Solar power and atmospheric CHON.” Yes, I suppose there are carbon, hydrogen, oxygen and nitrogen in the atmosphere—mainly the last three. This is why the stratosphere is already dominated by giant hydrogen-powered jellyfish that eat air plankton. Or something. Also I really like “hide, strike on a timer.”
The rationalist believes in intelligence the way normies believe in God. To the normie who believes in God, God can do anything. To the rationalist, anything is possible with sufficient IQ.
Gravity makes everything fall downward, including apples? No problem—nanotech can build a nano-apple that falls upward, attaches itself to the tree, connects the tree to the Internet, takes over the world and turns everyone into an apple. Like magic.
Yes, as Feynman put it, there is “plenty of room at the bottom.” But not magic. There is no magic at the bottom—only physics. And we know all the physics, or think we do.
(I am using awkward constructions like 'high cognitive power' because standard English terms like 'smart' or 'intelligent' appear to me to function largely as status synonyms. 'Superintelligence' sounds to most people like 'something above the top of the status hierarchy that went to double college', and they don't understand why that would be all that dangerous? Earthlings have no word and indeed no standard native concept that means 'actually useful cognitive power'. A large amount of failure to panic sufficiently, seems to me to stem from a lack of appreciation for the incredible potential lethality of this thing that Earthlings as a culture have not named.)
But Earthlings do have words for “impossible” and “diminishing returns.” Anyone smart who went to high school, which unfortunately I did, has seen the diminishing returns of “actually useful cognitive power.” Or do the returns stop diminishing above the “double college” level?
One simple way to answer this question is a fortiori. If AI can take over the world, it should be able to solve any much, much, much smaller problem—like taking over a high school.
Suppose some super-smart nerdy freshman, perhaps not too dissimilar to the young Eliezer, invents “actually useful cognitive power,” and even better solves the “AI alignment” problem. His golem actually serves him. His goals are its goals.
And his goal is—to be elected class president. Or better yet, homecoming king. And he also wants to actually control the school—curriculum, staffing, the whole deal. Start! No “miniature rockets or jets” required.
I cannot tell you how disappointing it is to see the smartest minds of my generation focused on issues like this, with their noses pointed in the air as they hop onto BART, dodging tents in the street and apparent corpses leaking precious bodily fluids. Can the nanobots clean up San Francisco, too? Why not? Apparently they can do anything.
To paraphrase Oliver Cromwell: in the bowels of Christ, I beseech thee—get a grip.
From the CA article
"To engage in Alt-Right thinking is to turn oneself into a vacuous skinsuit animated by raw stupidity. There is literally not a single shred of non-stupidity in the entire thing. Mencius Moldbug, stupid."
it's pretty impressive that your blog posts can make these people need to have group therapy sessions (circling, maybe?) and then just twitch and writhe in pure seething anger. the only evidence they could level against your cathedral hypothesis is that Elon is buying Twitter. lmao. Twitter's not a real place
I do mostly agree with you, but I just have a couple of comments. Firstly, while in theory GET requests are read-only, there are many systems out there on the internet that allow GET requests to perform non-trivial work that is not idempotent. Yes, this is really awful, and they should probably be POST requests. But they aren't. I'm sure that with a bit of searching you could find some awfully designed web server somewhere that will send emails based on GET requests.
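(For concreteness, the anti-pattern I mean looks something like this Flask sketch; the route, addresses, and mail setup are invented, but nothing in HTTP prevents it:)

```python
# A GET endpoint with a real side effect: the anti-pattern described above.
# Everything here (route, addresses, local SMTP relay) is illustrative.
from flask import Flask, request
import smtplib
from email.message import EmailMessage

app = Flask(__name__)

@app.route("/send-report")          # GET by default; this should be a POST
def send_report():
    msg = EmailMessage()
    msg["To"] = request.args.get("to", "ops@example.com")
    msg["From"] = "noreply@example.com"
    msg["Subject"] = "Report requested via a plain GET"
    msg.set_content("Nothing in the protocol stops a GET from doing this.")
    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
        smtp.send_message(msg)
    return "sent", 200

if __name__ == "__main__":
    app.run()
```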
Secondly, and a little more abstractly, perhaps we're already being manipulated by AI? There's much said about "bots" in places like Twitter and Facebook: people being manipulated by procedurally generated "fake news" articles, millions of tweets being made by bots, etc. Given that these bots are already manipulating normies into believing total nonsense, is it not conceivable that some hypothetical general AI could do an even better job of manipulating people into doing its "bidding"? It might not be "the AI will send an email asking for a particular mixture of proteins" but more along the lines of "the AI will manipulate society through social media into funding nanotechnology research 100x what it receives now" until such a time that it _can_ just send an email asking for a particular mixture of proteins.
Personally, I think that it's all exceedingly unlikely. It would require a general artificial intelligence with planning and strategic faculties orders of magnitude better than we have now. As you say, current "AI" can't even remember the beginning of a sentence by the time it gets to the end of it, or multiply 3-digit numbers. However, nor can most humans, probably, without pen and paper, and it doesn't seem to stop us going about our daily lives. If we need to multiply big numbers we can delegate it to simpler machines. The biggest problem with current "AI"/deep learning research is that it is very bad at certain things that traditional programming finds trivial, like remembering things and consulting its memory, sorting, maths, etc. So it stands to reason that a lot of current AI research is going into trying to work out how to marry "AI" with fixed-function units that can do stuff like multiply, or consult a database, where necessary: AI that can delegate to fixed-function units might be the leap that lets "AI" become much more powerful... But I'm still skeptical.