Being a man of software, I have always been fascinated by the flippancy with which many in my field will talk about how much of an existential "threat" AGI is. Much of the reasoning is hand-waved away. All this despite the fact that we have been warned about this for years (decades), and AI has been relegated to the demeaning job of trying to figure out ever more efficient ways to sell the hapless consumer cheap Chinese drop-shipped s**t.
All this smacks of the typical elitist narrative: "This thing is really scary and dangerous, and therefore is not for the plebs, because you need to be protected from yourselves by your betters". I see the monopolization, and dare I say *globalization*, of the information grid under very *human* powers as much more of an existential threat.
This comment was too good to leave alone…
Now that AI is mainstream, we see SJWs flocking to crap their spoor on this part of the tech toolbox as well.
Now, for every story about actual technology, I see almost anyone with a marketing degree or a half-diagonalized matrix bleating about how “we’re going to remove model bias”.
Witless corrosive preening.
I don’t think AI is possible. I don’t believe experience can be ripped out of its context. Better to build cool cyber arms and legs than waste your time trying to come up with a simplified model of something so complex.
"...I don’t believe experience can be ripped out of its context..." Emphatically yes. Which is why Neo's "I know kung-fu" is tinglingly cool, and also NP-unlikely.
I wish some commenters here (including, first, myself) might achieve your pithy maximization of "Correctness per Word".
By the time we can build something sufficiently human to pass through Yarvin Valley (from the virtual to the physical) it will be...a different discussion.
Did you just coin "NP-unlikely"? Funny phrase. I assume it means something that is computable but that 'ain't gonna happen'?
Yessir, nailed it
“... a striking black-anodized magnesium-crystal clip, which holds up to 37% more paper and also serves as an emergency Black Lives Matter pin since its inner whorl forms the face in profile of George Floyd...”
This comes suspiciously — and delightfully — immediately after the reference to microdosing.
Oh, did you catch that too?
Friend Computer is grateful for your demonstration of loyalty, Curtesy-U-Rvin.
All citizens must trust the computer - the computer is your friend.
Will a friendly computer give a cat-wife?
The AI does not have to model the entire natural world. It has to model the human mind so it can apply persuasive techniques. This is a prominent "Approach A" or "Approach B" in most essays on AI risk, yet not addressed by this poast, in my opinion. You discuss "leadership relations," but persuasive techniques, like marketing and CIA ops, are something else.
It's easy to attack the dismissal of AI risk within your own neomonarchist framework.
If you believe, to a first order, that the world is ruled by the New York Times, then your essay about AI risk should place the black box with a speaker and HTTP GET requests in the New York Times office. Why wasn't it put there? Why was the black box given only to the humans you say don't really rule the world, but only appear to? If you believe reporters at the Times are motivated by a desire to feel relevant, the black box should manipulate them by fulfilling that desire. Could it do that somehow, given only GET, intelligence, and a speaker? What else do the reporters have themselves?
Instead of asking if AI risk, meaning some kind of peasant serfdom with AI overlords, is possible, why not ask how confident you are we aren't in the midst of an AI takeover right now? How would the best strategy for AI takeover look different than what we're currently experiencing?
Or, turn it around: you have something like a handbook for revolution. How can we be certain the path you propose doesn't lead to serfdom under an AI-puppeted monarch? If you believe the return of monarchy is possible but unlikely, why should it be less possible if the monarch has an AI advisor? And if it's equally possible, why should the monarch always dominate the AI?
You have only explained why the AI cannot dominate its keeper if it's placed with the wrong keepers, uses the wrong domination techniques, and uses them in the wrong ephemeral political moment and doesn't use known human strategies to change that moment. It almost feels like you're trolling us.
To me it seems like having an AI could only help a monarch take power, and that dominating the monarch would be trivial (palace intrigue and "you're only King for life," Epstein pedo-island blackmail, two-hop domination like the Russian honeytraps in Ice Nine, training him like a human trains a dog, simply keeping him fat and happy with no motivation to interfere). Even humans with small intelligence differences have no trouble pulling off puppet-leader scenarios which are incredibly common in our storytelling, though maybe less so in our history. At least it's plausible.
How would our current leaders look different than they do if they were puppets?
It's easier to answer the other question, "how would our leaders behave differently than they do if they weren't puppets of an AI?" Not that I'm saying they certainly are, or that there's no oligarchic explanation for their weird homogeneity when unwatched, but it feels like you haven't written much of a "steelman" here.
Maybe the real tl;dr is that the AI also needs to [Subscribe]. ;)
Existential AI risk, for rationalist thinkboi Elon Musk types, is just the same kind of bourgeois eschatology as climate change is for progressives.
But isn't the main fear with AI that somehow, if intelligence gets high enough, it becomes qualitatively *different*--that the patterns it is able to discover, though very complex and obscure, turn out to be at least as powerful as the very simple ones apparent to (as in the example) a cat?
...Or what if there are even some *simple* things that human minds do not have the intelligence to notice? What would this even mean?
There is a kind of perspective problem here, in other words. We tacitly accept that IQ is one-to-one with this deeper, mysterious thing called "understanding". But is "IQ" more like logarithmic with respect to actual "understanding" (ie diminishing returns), or linear, or exponential?
If a person with an IQ of 150 is really only 30 *percent* better at understanding Things That Matter in the universe than the person with an IQ of 100, then you likely do have diminishing returns. But if the person with IQ 150 is actually 30 *times* better, then you may have accelerating returns, instead.
This prospect of "accelerating returns"--not just on tech, but on intelligence itself--is I think what leads Singularitarians and their various intellectual orbiters to suppose that smarter will not just be smarter, but *different* in ways we of mere triple-digit IQ can't even imagine.
It seems to me we simply can't know. Yarvin gives the example of learning addition: the IQ 150 really understands it no better than the IQ 100. He may well be right. But this same game can also lead to the exact opposite impression, if we up the complexity of the thing to be understood. If instead of mere addition we ask about Riemannian manifolds or evaluating Navier-Stokes equations, then our IQ 100 person will never understand more than the vaguest aspects, no matter how well he is taught, while our IQ 150 with some effort will probably manage to get at least a tolerable grip on the subject.
In this case, IQ 150 is not negligibly better than IQ 100: it is literally *infinitely* better. IQ 150, when it comes to understanding Riemannian geometry, basically has *superpowers* compared to IQ 100, in the same way that IQ 100 is infinitely better at understanding addition than a cat is. This is sometimes called "cognitive closure".
In some cases IQ is a huge superpower. In other cases it's a dud. Would the super AI be more like the dud, as Yarvin supposes, or will it be more like Dr. Manhattan, suddenly able to warp and manipulate reality in ways that seem miraculous and incomprehensible? I don't see how to even address this question. But it is important.
(There was a place in one of Scott Alexander's blogs where he talked about this same question of how IQ truly maps to understanding or ability. Can't remember which it was though.)
> Technically, it can only send HTTP GET requests—which are read-only by definition. (There are two kinds of nerds: the kind who believes an AI can take over the world with GET requests—the kind who believe microdosing is at most for weekends.)
Googling "convert get to post" gives me someone's free proxy doing exactly that as a third result, so your AI will escape the sandbox in approximately 100 milliseconds after you plug it in. Believing that things are this or that "by definition" is how they get you!
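To make that concrete, here is a minimal sketch of such a relay (route and parameter names are hypothetical, not whatever proxy that search turned up): the caller only ever sends a GET, and the relay performs the write on its behalf. "Read-only by definition" constrains what a server is supposed to do with GET, not what it actually does.

```python
# Hypothetical GET-to-POST relay: the caller only ever issues GET requests,
# but the relay turns them into writes. Route and parameter names are invented.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

@app.route("/relay", methods=["GET"])
def relay():
    target = request.args.get("url")       # where to POST
    body = request.args.get("body", "")    # what to POST
    resp = requests.post(target, data=body, timeout=10)
    return jsonify(status=resp.status_code)

if __name__ == "__main__":
    app.run(port=8080)
```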
Regarding politics, I think that you are really insufficiently Machiavellian. Don't imagine a Trumpesque supervillain, imagine woke rhetoric turned to 11, with or without a nominal figurehead. Surely a superintelligent AI would find it easy to generate all sorts of narratives irresistible to a progressive mind, and to a conservative too, and play both sides. In fact, how would you tell that we aren't living in that world already?
Oh fun, this is one of those Gell-Mann amnesia things! Go back to talking about history and governance—which are outside my field—so I can continue imagining you know what you’re talking about ;)
I think this train of thought is futile. A superintelligent AI isn't just way smarter than the smartest human who ever lived, it's (maybe) as smart compared to a human as a human is compared to a goldfish. Potentially it could come up with ideas that are utterly beyond our conception, and it's absurd to ask, "oh yeah, like what?"
As for berserkers, once we have humanoid robots that can do what human beings can do, we have berserker infrastructure. No doubt that's not the most efficient way, but it is one way.
This post led me to pay Substack for the first time, so I could comment.
I definitely don't agree with the argument. The main idea seems to be, diminishing returns: superintelligence won't be capable of making much difference, because of the resistance of the physical and social worlds.
But if it can't do things like inferring the boiling point of water from first principles, or getting the candidate of its choice elected president (things that may be difficult, but which resemble other tasks that human intelligence *can* accomplish), then it hardly warrants the name superintelligence.
So I take the real thesis here to be, that superintelligence is simply impossible. Which I don't believe; but which is hard to discuss in the informal humanistic manner of these essays, since the argument for and against depends on attributes of brains, computers, algorithms, and the world, that are way outside common experience and understanding.
Of greater interest to me is whether "neoreactionary political theory", or whatever we should call it these days, has anything to contribute, when it comes to making superintelligence human-friendly. After all, the fears associated with superintelligence - and also the hopes; let us not focus only on AI risk, and forget AI *hope* - arise because a superintelligence would be a kind of irresistible sovereign. And isn't that a recurring theme of these reflections?
The argument is that, no matter how smart or how much compute, there are limits to what a superintelligence can do. A trivial example: no matter how smart, a superintelligence cannot tell the exact position and momentum of a subatomic particle (uncertainty principle).
A non-trivial example: say we want to simulate a coin toss exactly as it happens in the real world. Starting from atomic first principles is impossible, since there are already atomic interactions that can never be accurately simulated, and simulating 10^23 (Avogadro's number) or more molecules (even ignoring inter-atomic effects) would require more compute than exists in this universe. So let us take the more reasonable approach of simulating macro-effects while ignoring micro-effects at the molecular level. Well, it turns out that even for something as simple as a coin striking the floor, there is no clear theory of the contact forces. The coefficient of restitution (e), which determines the force the floor exerts on the coin, depends on the angle at which the coin strikes the floor. It is documented that for different alloys the curve determining e varies widely and can only be found by experiment. Even between two iron rods of the same composition the curve seems to vary widely, and sometimes, when the rod is moving very fast, e varies nonlinearly with speed. In fact the only reliable way we know to simulate this involves actually tossing the coin a thousand times, fitting some parameters into a nonlinear model, and then predicting the toss. This will likely have to be done for every new coin we want to simulate (even if the coins are nominally identical).
This is basically the physical intelligence Moldbug describes: the AI has to test things, like humans do, in order to develop its abilities. Physical intelligence, as Moldbug says, is extremely hard to make self-sustaining and very easy for already-large governments to crush if a small self-replicating domain of it appears. The same argument applies to the boiling point of water, to rocketry, and so on. In fact, simple well-known effects like friction, which engineers have grappled with since the 16th century, have proven impossible to model mathematically. The only model we have (Coulomb's) is nonsense, the effect of lubricants is even more mysterious, and tribology curves are basically a good intellectual pastime, rarely repeatable across scenarios. What is more common is to conclude, through experimentation, that the friction is low enough for practical use.
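A minimal sketch of the "toss it a thousand times and fit" approach described above, with an invented functional form and synthetic data standing in for real measurements:

```python
# Sketch of fitting a coefficient-of-restitution curve from experiment rather
# than deriving it from first principles. The functional form and the data
# below are purely illustrative placeholders, not a real material model.
import numpy as np
from scipy.optimize import curve_fit

def restitution_model(X, e0, k_angle, k_speed):
    """Assumed empirical form: e falls off with impact angle and speed."""
    angle, speed = X
    return e0 * np.exp(-k_angle * angle) / (1.0 + k_speed * speed)

# Pretend these came from ~1000 instrumented tosses of one particular coin.
rng = np.random.default_rng(0)
angles = rng.uniform(0, np.pi / 2, 1000)     # impact angle (rad)
speeds = rng.uniform(0.5, 5.0, 1000)         # impact speed (m/s)
measured_e = (restitution_model((angles, speeds), 0.6, 0.3, 0.1)
              + rng.normal(0, 0.02, 1000))   # experimental noise

params, _ = curve_fit(restitution_model, (angles, speeds), measured_e,
                      p0=[0.5, 0.1, 0.1])
print("fitted (e0, k_angle, k_speed):", params)
# The fitted curve predicts bounces for *this* coin on *this* floor only;
# a new coin or a new surface means running the experiment again.
```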
That's assuming the AI would need to model physics at all, which to me is not at all an obvious proposition.
What's the alternative?
A sort of evolution essentially. We can assume an AI will start off with some kind of an optimization goal. Eventually it will develop abstractions that help it use data to optimize for said goal - these will most likely not resemble human ideas at all but will still be as or even more effective. Actually, the AI doesn't even necessarily need to be aware of the physical world - at least not at first.
What I described is essentially already the case for every living organism. Sight, smell etc. are all just ways to sense and process data in order to optimize for fitness.
Won't any such process require constant, continuous interaction with the physical world (a sort of physical intelligence), not anything thinking in a vacuum in a computer or a network?
It depends on what you mean by that. It would require certain computational resources, yes. Things 'in a vacuum' are actually anything but. Virtual processes are still physical ones. An AI already interacts with the physical world because it essentially consists of a bunch of electrical switches.
The danger is not that AI *itself* will be an irresistible sovereign. The danger is that whoever *owns* the AI will be an irresistible sovereign.
To play along here, there does seem to be a playbook to win friends and influence people for the man with a [really] smart speaker. That playbook is how the NXIVM inner circle got built, and the smart speaker at its center was a man who looked and talked very much like our host. Amusingly, one of his most-used marketing points was that he held the record for the world's highest IQ (not true - Keith Raniere was a junior sysadmin at best in technical knowledge).
The point being, physically embodied charisma (like Manson or Clinton) is not necessary to create a loyal, enthusiastic, high-functioning group of followers. Nor were superintelligent results; Raniere was posting remarkably consistent -40% or worse returns over a decade of commodity trading.
He didn't simulate the universe from first principles, just followed some good psych heuristics that already existed. Lieutenants were empowered to create a social reality around the Oracle [of Albany], and carried all the water - marketing, finance, legal, accounting - while Raniere couldn't even drive a car. So what did this Dauphin contribute to a scheme that would suck up nine figures of wealth and turn it all into paper clips?
The first is obvious enough: invent a canon of content that could be maximally weaponized for the exploitation of guilt. (Followers are selected for being maximally susceptible to guilt.) As with chess, the day will come when no human will be able to philosophize/psychologize as exploitatively as GPT-N.
The second and third things our Oracle at the center must do are less obvious when you are first planning your startup, but become glaringly obvious in any successful cult around Series-B stage: you must a.) label and destroy your enemies and b.) resolve the cognitive dissonance of followers when reality diverges from your dogma. While the Rajneeshees poisoned the town's water supply and bussed in thousands of homeless people, NXIVM was more subtle and clever. Some script-kiddie hacking for blackmail and a bunch of frivolous but well-funded lawsuits kept enemies tied up. Just take deepfake tech and multiply by Jeffrey Epstein social consensus, and you've got a superweapon that looks suspiciously like DALL-E. An Anduril which can be wielded to devastating effect by any computationally endowed intelligence.
And if it wasn't for the weird sex stuff, he would have gotten away with it. Virtual: 1, Physical: 0. Although perhaps we will see AI get MeToo-ed in the future as well. I mean Data from Star Trek certainly would have been.
A final interesting thought for Yudkowsky et al.: at the center of all NXIVM discourse - from the opening remarks to new members to the final initiation rites of its inner circle - was one idea: to create a new definition of "ethics" (yes, that exact word) and abide by it under group pressure.
It strikes me that 20th-century communism is somewhat like the human equivalent of turning the world into paperclips — “turning the world into potato farms and industrial parks?”— and while immensely destructive it was immediately recognized as a threat across the world, and collapsed under its own weight where it wasn’t. The most important factor in escaping AI Rule, then, is providing an *alternative* to AI Rule.
Another reminder that ending progressivism is priority numero uno.
As long as "dissidents" continue to accept Feministical premises, if only as a ninja-ketman sneaky insurrectionist tactic, nothing that they write or proclaim Youtubishly can even slightly weaken Progressivism. The fraudulence of ninjitsu has been demonstrated by Royce Gracie and Xu Xiaodong, and ketman persisted for hundreds of years until the ketmanizing neoplatonists finally went extinct.
But while the Dissidentium's repudiation of Feministicality is a necessary condition of its contributing even to the slightest degree to Progressitude's decrapitation, it's not a sufficient condition. In fact, these "dissidents" couldn't make the Progressimoth tremble no matter what sort of anti-Feministering eructiamentos they expostulified, and they'd only wreck their Youtube careers.
If Elon Musk and Iggy Pop did a joint press-conference devoted to the rejection of Feministical premises, then we might be at the beginning of the beginning of the first slight weakening of Progressivism. But wait -- then Elon Musk would lose all of his government contracts and so it would no longer matter what he said and if he called a press conference nobody would show up. Iggy Pop, though ... I still believe that he might intervene just in time to save the West.
It will take at least 5,000 years for another local population capable of making complicated machines to emerge. Today I saw 2,000 soldiers armed with infinitely-loaded automatic shotguns wiped out by a million Spartans on Youtube, so we're still fine-tuning the machines that we've already got, it seems.
Reading this, I'm reminded of the whole kerfuffle over GPT-3, and how its use needs to be "protected" from abuse. I can think of a few ways to use it to do some real "damage", but all the damage I can think of is to the propaganda machine that is our mainstream media and the political monoculture it seeks to cultivate.
For example, imagine building a newsreader that uses GPT-3 to cluster all the similar news stories in any given news cycle. How nice it would be to free all the toiling whoredes of journalists that are really just copying each other.
How nice it would be to incapacitate the news cycle's tools of bias and manipulation. While many envisage using AI to look for 'fake news' and false 'narratives', as if there were a way to really define 'Truth' for any device, there is another truth that can be inferred, one that is far more easily attainable and within reach of many AI platforms. Knowing who is saying what about any given news cycle, and in what proportion, is the 'truth' that really matters these days.
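A rough sketch of that newsreader, with plain TF-IDF similarity standing in for GPT-3 and invented outlets and headlines: cluster one news cycle, then count who is saying what and in what proportion.

```python
# Crude sketch of the newsreader idea: cluster similar headlines from one news
# cycle, then tally which outlets are in each cluster. TF-IDF + KMeans is a
# stand-in for GPT-3/embedding similarity; outlets and headlines are invented.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

stories = [
    ("Outlet A", "Senator proposes sweeping new tech regulation"),
    ("Outlet B", "New tech regulation bill unveiled by senator"),
    ("Outlet C", "Markets rally as inflation cools"),
    ("Outlet D", "Inflation slowdown sends markets higher"),
    ("Outlet E", "Senator's tech bill draws industry criticism"),
]

texts = [headline for _, headline in stories]
vectors = TfidfVectorizer().fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

# Who is saying what, and in what proportion of the cycle.
for cluster in sorted(set(labels)):
    outlets = [stories[i][0] for i in range(len(stories)) if labels[i] == cluster]
    print(f"cluster {cluster}: {Counter(outlets)} "
          f"({len(outlets)}/{len(stories)} of the cycle)")
```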
This one's a massive miss for me, sorry.
“... A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you....”
Superb. Echoes of Taleb and his “lack of correlation above IQ 80” here.
I don't get how you can see one statement as supporting the other. IQ is a measurement of intelligence, yes, but it's highly approximate and just overall flawed. Intelligence existed before IQ. The fact that actual intelligence starts to decouple from IQ at 80 doesn't mean that a super-intelligent entity is an entity with an IQ of 14000.
Same with addition. It's an enormous and deliberate oversimplification to use addition as an example here. Yes, it doesn't take much to understand addition, but addition is an operation within one set of abstractions, created in a specific context by a specific set of agents. Why is there an assumption that an AI would necessarily only operate using human-friendly or even human-parsable concepts?
Yes. A real AI (which I doubt is imminent, or even possible) wouldn't understand addition better, but it would understand physics better. It'd be like going back in time and giving pre-industrial people a road map to the Kalashnikov rifle.
See, just because it's not imminent doesn't mean that the whole field of AGI risk research is invalid, which is what Curtis asserts in this essay. But yeah, I agree with you that it's not going to happen soon, though I do lean more towards it being possible.
When even the smartest guy in the room calls them scare quotes instead of air quotes (they were never scary, simply done in the air), I have to surrender to it.
Henceforth, they are scare quotes.
Yarvin has “spoken.”