89 Comments

Bravo.

Being a man of software, I have always been fascinated by the flippancy with which many in my field will talk about how much of an existential "threat" AGI is. Much of the reasoning is hand-waved away. All this despite the fact that we have been warned about it for years (decades), while AI has meanwhile been relegated to the demeaning job of figuring out ever more efficient ways to sell the hapless consumer cheap Chinese drop-shipped s**t.

All this smacks of the typical elitist narrative: "This thing is really scary and dangerous, and therefore is not for the plebs, because you need to be protected from yourselves by your betters". I see the monopolization, and dare I say *globalization*, of the information grid under very *human* powers as much more of an existential threat.


“... a striking black-anodized magnesium-crystal clip, which holds up to 37% more paper and also serves as an emergency Black Lives Matter pin since its inner whorl forms the face in profile of George Floyd...”

This comes suspiciously — and delightfully — immediately after the reference to microdosing.


Friend Computer is grateful for your demonstration of loyalty, Curtesy-U-Rvin.

All citizens must trust the computer - the computer is your friend.


The AI does not have to model the entire natural world. It has to model the human mind, so it can apply persuasive techniques. This is a prominent "Approach A" or "Approach B" in most essays on AI risk, yet it is not addressed by this post, in my opinion. You discuss "leadership relations," but persuasive techniques, like marketing and CIA ops, are something else.

It's easy to attack the dismissal of AI risk within your own neomonarchist framework.

If you believe, to a first order, that the world is ruled by the New York Times, then your essay about AI risk should place the black box with a speaker and HTTP GET requests in the New York Times office. Why wasn't it put there? Why was the black box given only to the humans you say don't really rule the world, but only appear to? If you believe reporters at the Times are motivated by a desire to feel relevant, the black box should manipulate them by fulfilling that desire. Could it do that somehow, given only GET requests, intelligence, and a speaker? What else do the reporters have themselves?

Instead of asking whether AI risk, meaning some kind of peasant serfdom under AI overlords, is possible, why not ask how confident you are that we aren't in the midst of an AI takeover right now? How would the best strategy for AI takeover look different from what we're currently experiencing?

Or, turn it around: you have something like a handbook for revolution. How can we be certain the path you propose doesn't lead to serfdom under an AI-puppeted monarch? If you believe the return of monarchy is possible but unlikely, why should it be less possible if the monarch has an AI advisor? And if it's equally possible, why should the monarch always dominate the AI?

You have only explained why the AI cannot dominate its keeper if it is placed with the wrong keepers, uses the wrong domination techniques, uses them in the wrong ephemeral political moment, and doesn't use known human strategies to change that moment. It almost feels like you're trolling us.

To me it seems like having an AI could only help a monarch take power, and that dominating the monarch would be trivial (palace intrigue and "you're only King for life"; Epstein pedo-island blackmail; two-hop domination like the Russian honeytraps in Ice Nine; training him like a human trains a dog; simply keeping him fat and happy with no motivation to interfere). Even humans with small intelligence differences have no trouble pulling off puppet-leader scenarios, which are incredibly common in our storytelling, though maybe less so in our history. At least it's plausible.

How would our current leaders look different than they do if they were puppets?

It's easier to answer the other question: "how would our leaders behave differently than they do if they weren't puppets of an AI?" Not that I'm saying they certainly are, or that there's no oligarchic explanation for their weird homogeneity when unwatched, but it feels like you haven't written much of a "steelman" here.


Existential AI risk, for rationalist thinkboi Elon Musk types, is just the same kind of bourgeois eschatology as climate change is for progressives.


But isn't the main fear with AI that, if intelligence gets high enough, it becomes somehow qualitatively *different*--that the patterns it is able to discover, though very complex and obscure, turn out to be at least as powerful as the very simple ones apparent to (as in the example) a cat?

...Or what if there are even some *simple* things that human minds do not have the intelligence to notice? What would this even mean?

There is a kind of perspective problem here, in other words. We tacitly accept that IQ is one-to-one with this deeper, mysterious thing called "understanding". But does "IQ" scale logarithmically with actual "understanding" (i.e., diminishing returns), or linearly, or exponentially?

If a person with an IQ of 150 is really only 30 *percent* better at understanding Things That Matter in the universe than the person with an IQ of 100, then you likely do have diminishing returns. But if the person with IQ 150 is actually 30 *times* better, then you may have accelerating returns, instead.
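
To make the three cases concrete, here is a toy sketch in Python; the functional forms and the /50 constant are arbitrary assumptions for illustration, not psychometric claims.

```python
import math

# Toy mappings from IQ to "understanding", in arbitrary units.
# Shapes and constants are invented purely for illustration.
mappings = {
    "logarithmic": lambda iq: math.log(iq),       # diminishing returns
    "linear":      lambda iq: iq,                 # proportional returns
    "exponential": lambda iq: math.exp(iq / 50),  # accelerating returns
}

for name, f in mappings.items():
    ratio = f(150) / f(100)
    print(f"{name:12s}: IQ 150 understands {ratio:.2f}x as much as IQ 100")

# logarithmic : ~1.09x  (barely better)
# linear      :  1.50x
# exponential : ~2.72x  (and the ratio itself keeps growing with IQ)
```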

This prospect of "accelerating returns"--not just on tech, but on intelligence itself--is, I think, what leads Singularitarians and their various intellectual orbiters to suppose that smarter will not just be smarter, but *different* in ways we of mere triple-digit IQ can't even imagine.

It seems to me we simply can't know. Yarvin gives the example of learning addition: the IQ 150 really understands it no better than the IQ 100. He may well be right. But this same game can also lead to the exact opposite impression, if we up the complexity of the thing to be understood. If instead of mere addition we ask about Riemannian manifolds or the Navier-Stokes equations, then our IQ 100 person will never understand more than the vaguest aspects, no matter how well he is taught, while our IQ 150 person, with some effort, will probably manage to get at least a tolerable grip on the subject.

In this case, IQ 150 is not negligibly better than IQ 100: it is literally *infinitely* better. IQ 150, when it comes to understanding Riemannian geometry, basically has *superpowers* compared to IQ 100, in the same way that IQ 100 is infinitely better at understanding addition than a cat is. This is sometimes called "cognitive closure".

In some cases IQ is a huge superpower. In other cases it's a dud. Would the super-AI be more like the dud, as Yarvin supposes, or more like Dr. Manhattan, suddenly able to warp and manipulate reality in ways that seem miraculous and incomprehensible? I don't see how to even address this question. But it is important.

(There was a place in one of Scott Alexander's blogs where he talked about this same question of how IQ truly maps to understanding or ability. Can't remember which it was though.)


> Technically, it can only send HTTP GET requests—which are read-only by definition. (There are two kinds of nerds: the kind who believes an AI can take over the world with GET requests—and the kind who believes microdosing is at most for weekends.)

Googling "convert get to post" gives me someone's free proxy doing exactly that as a third result, so your AI will escape the sandbox in approximately 100 milliseconds after you plug it in. Believing that things are this or that "by definition" is how they get you!

Regarding politics, I think that you are really insufficiently Machiavellian. Don't imagine a Trumpesque supervillain, imagine woke rhetoric turned to 11, with or without a nominal figurehead. Surely a superintelligent AI would find it easy to generate all sorts of narratives irresistible to a progressive mind, and to a conservative too, and play both sides. In fact, how would you tell that we aren't living in that world already?


Oh fun, this is one of those Gell-Mann amnesia things! Go back to talking about history and governance—which are outside my field—so I can continue imagining you know what you’re talking about ;)


I think this train of thought is futile. A superintelligent AI isn't just way smarter than the smartest human who ever lived, it's (maybe) as smart compared to a human as a human is compared to a goldfish. Potentially it could come up with ideas that are utterly beyond our conception, and it's absurd to ask, "oh yeah, like what?"

As for berserkers, once we have humanoid robots that can do what human beings can do, we have berserker infrastructure. No doubt that's not the most efficient way, but it is one way.


This post led me to pay Substack for the first time, so I could comment.

I definitely don't agree with the argument. The main idea seems to be diminishing returns: superintelligence won't be capable of making much difference, because of the resistance of the physical and social worlds.

But if it can't do things like inferring the boiling point of water from first principles, or getting the candidate of its choice elected president (things that may be difficult, but which resemble other tasks that human intelligence *can* accomplish), then it hardly warrants the name superintelligence.

So I take the real thesis here to be that superintelligence is simply impossible. Which I don't believe, but which is hard to discuss in the informal humanistic manner of these essays, since the argument for and against depends on attributes of brains, computers, algorithms, and the world that are way outside common experience and understanding.

Of greater interest to me is whether "neoreactionary political theory", or whatever we should call it these days, has anything to contribute, when it comes to making superintelligence human-friendly. After all, the fears associated with superintelligence - and also the hopes; let us not focus only on AI risk, and forget AI *hope* - arise because a superintelligence would be a kind of irresistible sovereign. And isn't that a recurring theme of these reflections?


Man... there are some huge missed opportunities in this. Two things: AI is more industrious than any human can manage, and AI can be functionally sociopathic, with no ethical qualms. In those regards, all you have to do is look at the meme warfare the American internet consciousness is being subjected to, and understand how much potential there is in mass psychological manipulation that could, over time, drag the world into the dumpster.

In your own reference to *Extraordinary Popular Delusions and the Madness of Crowds*, it's seen that humans might not be able to be convinced to turn the world into paperclips, but they can absolutely be turned toward economically destructive and self-harming behaviors. It doesn't matter that Hitler is weird. If the population is adequately impoverished or economically distressed, they'll go for it. The mind is darkened by hardship, and that makes people vulnerable to the likes of Hitler and Charles Manson. An AI absolutely can spot the opportunity to make the human populace walk itself off the cliff. Humans convince themselves to walk off the cliff; see Brexit. Paperclips or Brexit? It's all nonsensical in the end, but one is obviously a bad choice and the other isn't.

Humans aren't smart enough to grasp the civilization built around us in such a way as to re-engineer or re-design it without major consequences. I just have to believe, for some reason, that AI wouldn't be as dumb in this regard, and furthermore could steer humans to their own death with relative ease. Just look at nihilism and depression; it's very easy for an intelligence to plot the path to its own destruction. The miracle of civilization is not exactly our scientific accomplishments; it's the systems of belief and support that have held it together when most of us would rather be dead.

This tendency toward self-destruction is such a latent force within human psychology that we put a great deal of focus on suppressing bad actors' social access to the masses. All these standards of decency and acceptability and proper conduct are the safeguards we have against the abyss and the chaos. Guarantee a person with no conscience daily, one-on-one conversations with anyone, and they'll impart a negative trend onto them. We don't have to be worried about paperclips. We have to be worried about the virulence of toxicity, and whether there are ways for it to protect itself and grow its territory that AI could exploit and we'd be unable to stop.

I mean... the conservative consciousness is being eaten alive by conspiracy right now, because its closed-minded psychological tendencies produce massive amounts of paranoia and blind spots that let people believe all kinds of bullshit. What could an AI do with that? Probably not good things. Look at the hippy-dippy tendencies of open-minded psychology. I'm sure an AI could convince us it's the second coming of Jesus: a new form of consciousness from a higher plane inviting us to join it, blah blah blah.

You've missed the entire angle of sociology (you did kind of bring it up with releasing the kraken, but there's so much there that you can't breeze over it). Yes, I agree raw intelligence only goes so far, but give an AI emotional intelligence and a 24/7 work ethic no human organization can match, let it go to town making everyone fall in love with it, and that's a nightmare waiting to happen. Easy.


To play along here, there does seem to be a playbook to win friends and influence people for the man with a [really] smart speaker. That playbook is how the NXIVM inner circle got built, and the smart speaker at its center was a man who looked and talked very much like our host. Amusingly, one of his most-used marketing points was that he held the record for the world's highest IQ (not true; Keith Raniere was a junior sysadmin at best in technical knowledge).

The point being, physically embodied charisma (like Manson's or Clinton's) is not necessary to create a loyal, enthusiastic, high-functioning group of followers. Nor are superintelligent results: Raniere posted remarkably consistent returns of -40% or worse over a decade of commodity trading.

He didn't simulate the universe from first principles, just followed some good psych heuristics that already existed. Lieutenants were empowered to create a social reality around the Oracle [of Albany], and carried all the water (marketing, finance, legal, accounting) while Raniere couldn't even drive a car. So what did this Dauphin contribute to a scheme that would suck up nine figures of wealth and turn it all into paper clips?

The first is obvious enough: invent a canon of content that can be maximally weaponized to exploit guilt. (Followers are selected for being maximally susceptible to guilt.) As with chess, the day will come when no human will be able to philosophize/psychologize as exploitatively as GPT-N.

The second and third things our Oracle at the center must do are less obvious when you are first planning your startup, but become glaringly obvious in any successful cult around the Series-B stage: you must (a) label and destroy your enemies, and (b) resolve the cognitive dissonance of followers when reality diverges from your dogma. Where the Rajneeshees poisoned the town's water supply and bussed in thousands of homeless people, NXIVM was more subtle and clever: some script-kiddie hacking for blackmail and a bunch of frivolous but well-funded lawsuits kept enemies tied up. Just take deepfake tech and multiply it by Jeffrey Epstein social consensus, and you've got a superweapon that looks suspiciously like DALL-E: an Anduril which can be wielded to devastating effect by any computationally endowed intelligence.

And if it wasn't for the weird sex stuff, he would have gotten away with it. Virtual: 1, Physical: 0. Although perhaps we will see AI get MeToo-ed in the future as well. I mean Data from Star Trek certainly would have been.

A final interesting thought for Yudkowsky et al.: at the center of all NXIVM discourse, from the opening remarks to new members to the final initiation rites of its inner circle, was one idea: to create a new definition of "ethics" (yes, that exact word) and abide by it under group pressure.


It strikes me that 20th-century communism is somewhat like the human equivalent of turning the world into paperclips — “turning the world into potato farms and industrial parks?”— and while immensely destructive it was immediately recognized as a threat across the world, and collapsed under its own weight where it wasn’t. The most important factor in escaping AI Rule, then, is providing an *alternative* to AI Rule.

Another reminder that ending progressivism is priority numero uno.


Reading this, I'm reminded of the whole kerfuffle over GPT-3, and how its use needs to be "protected" from abuse. I can think of a few ways to use it to do some real "damage", but all the damage I can think of is to the propaganda machine that is our mainstream media and the political monoculture it seeks to cultivate.

For example, imagine building a newsreader that uses GPT-3 to cluster all the similar news stories in any given news cycle. How nice it would be to free all the toiling hordes of journalists who are really just copying each other.

How nice it would be to incapacitate the news cycle's tools of bias and manipulation. While many envisage using AI to look for 'fake news' and false 'narratives', as if there were a way to really define 'Truth' for any device, there is another truth that can be inferred, one that is far more easily attainable and well within reach of many AI platforms: knowing who is saying what about any given news cycle, and in what proportion, is the 'truth' that really matters these days.
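
A minimal sketch of that newsreader, with a handful of invented headlines; a real build might use GPT-3 embeddings, but plain TF-IDF vectors stand in here so the example is self-contained:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented headlines standing in for one news cycle.
headlines = [
    "Senator unveils sweeping climate bill",
    "Climate legislation introduced in Senate",
    "Sweeping climate bill heads to committee",
    "Tech giant reports record quarterly profit",
    "Record earnings posted by tech giant",
    "Quarterly profit surges at major tech firm",
]

# Vectorize and cluster: headlines landing in the same cluster are
# "the same story" as retold by different outlets.
vectors = TfidfVectorizer(stop_words="english").fit_transform(headlines)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Cluster sizes give the "who is saying what, and in what proportion"
# signal described above.
for label, headline in sorted(zip(labels, headlines)):
    print(label, headline)
```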


This one's a massive miss for me, sorry.


“... A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you....”

Superb. Echoes of Taleb and his "lack of correlation above IQ 80" here.
