68 Comments

From the CA article

"To engage in Alt-Right thinking is to turn oneself into a vacuous skinsuit animated by raw stupidity. There is literally not a single shred of non-stupidity in the entire thing. Mencius Moldbug, stupid."

it's pretty impressive that your blog posts can make these people need to have group therapy sessions (circling, maybe?) and then just twitch and writhe in pure seething anger. the only evidence they could level against your cathedral hypothesis is that Elon is buying Twitter. lmao. Twitter's not a real place

Expand full comment

I haven't read the CA article but I don't associate them with Rationalists - is it connected to this essay or the normal hitpiece?

Expand full comment

I think the CA article was a hit piece on both (i.e. 'rationalist' and 'neo-reactionaries'), e.g. how they're both 'the same people'.

Expand full comment

I do mostly agree with you, but I just have a couple of comments. Firstly, while in theory GET requests are read-only, there are many systems out there on the internet that allow GET requests to perform non-trivial work that is not idempotent. Yes, this is really awful, and they should probably be POST requests. But they aren't. I'm sure that with a bit of searching you could find some awfully designed web server somewhere that will send emails based on GET requests.

Secondly, and a little more abstractly, perhaps we're already being manipulated by AI? There's much said about "bots" in places like Twitter and Facebook: people being manipulated by procedurally generated "fake news" articles, millions of tweets being made by bots, etc. Given that these bots are already manipulating normies into believing total nonsense, is it not conceivable that some hypothetical general AI could do an even better job of manipulating people into doings its "bidding"? It might not be "the AI will send an email asking for a particular mixture of proteins" but more along the lines of "the AI will manipulate society through social media into funding nanotechnology research 100x what it receives now" until such a time that it _can_ just send an email asking for a particular mixture of proteins.

Personally, I think that it's all exceedingly unlikely. It would require a general artificial intelligence with planning and strategic facilities orders of magnitude better than we have now. As you say, current "AI" can't even remember the beginning of a sentence by the time it gets to the end of it, or multiply 3-digit numbers. However, nor can most humans, probably, without pen and paper, and it doesn't seem to stop us going about our daily lives. If we need to multiply big numbers we can delegate it to simpler machines. The biggest problem with current "AI"/deep learning research is that it is very bad at certain things that traditional programming finds trivial, like remembering things and consulting its memory, sorting, maths, etc. So it stands to reason that a lot of current AI research is going into trying to work out how to marry "AI" with fixed function units that can do stuff like multiply, or consult a database, where necessary: AI that can delegate to fixed function units might be that leap that lets "AI" become much more powerful... But I'm still skeptical.

Expand full comment

I can unfortunately confirm the first part of your comment, as somebody who once had a small web app running that posted data based on GET requests. It was awfully convenient.

Expand full comment

The example is accurate, but these bots and AIs are not operating by their own wills, they are just being used as propaganda tools. It is the 'man behind the curtain' who is the manipulator, not the software.

Bots and AIs have no will of their own. However advanced they may be, they are ultimately given their purpose from an external entity (us).

Expand full comment

Yep, I've been periodically working on a Wizard of Oz redux in spare time. Probably will never release but its an incredible American work. Never more relevant in more ways than realized.

Expand full comment
Jun 12, 2022·edited Jun 12, 2022

Googler just fired for convincing himself the AI was fully conscious. He even hired a lawyer on the bots behalf: https://web.archive.org/web/20220611210134/https://washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Expand full comment

I think this framing is the right way to think about this:

https://apxhard.com/2021/03/31/economies-and-empires-are-artificial-general-intelligences/

all kinds of systems humans have been using for thousands of years act _something like_ a recursively improving AGI

Expand full comment

Calm down, Nick Land. Also something like archotropism (the amount of political power on earth should be constant) and ergodic hypothesis on complexity (governmental complexity siphons political entropy) might be good way on explaining why AI can never be as strong as we wished.

Expand full comment

Basically, the main critique is the exact same one MM used against the University-Government complex: there is no such thing as idempotency, since any side effects that are created by a smart enough inquirer will be noticed by that same inquirer.

Expand full comment

So no GET, only POST.

Expand full comment

Yeah, you do kind of lose the learning potential then though, so it's a bit crippled.

Expand full comment

Thank you for the second point. People "manipulate" other people all the time. Sometimes computers produce more effective manipulation. For example, if Amazon thought that tasking a human to curate the "products you might like" section of their store page would bring more revenue, they'd do that. But instead they think a different approach works best, one that basically outsources such work to a kind of computer we call "AI".

Humans and AI are already working together to manipulate you. It need not be a bad thing; you can make your own value judgements about it. But let's face this reality.

Expand full comment

Let's do one better: maybe the real AI risk are the semi-intelligent, not hyper-intelligent. The curse of development itself (the tongue-in-cheek equivalent of the 18IQ thesis) means that once AI can reach a level that is parable to an average Facebook user (not an average well-educated person) things can get badly, like having an explosive amounts of clients to care for instead of livestock or commoners. https://universalprior.substack.com/p/making-of-ian/comment/6861049?s=r https://www.ribbonfarm.com/2010/04/14/the-gervais-principle-iii-the-curse-of-development/

Expand full comment

>Good luck sending one email with GET requests

app.get('/myevilai/sendemail/', sendEmail);

<...>

function sendEmail(req, res) {

let {subject, address, body} = req.query;

emailClient.send(subject, address, body);

res.status(200).end();

}

Of course this "implementation" uses a GET when it should be really using a POST, but I've seen worse.

In other words: I am totally not getting this part of your argumentation in this post; and neither did I get it in the original one. But the Golem myth metaphor rocks.

Expand full comment

GET requests do not write code. That's the point of the argument. GET requests can send an email if they execute a function that command them to do so. It is very questionable to me a current AI system can do that the same way you did. The argument here is that computational systems are at least at this point not able to write code the way humans can. The ability to do that would not only require programming skills, but also a lot of knowledge about social interaction and malicious intent to create havoc. The moment AIs are able to do software development, a lot of people including me will lose their job. I haven't noticed any scarcity in programming jobs in the market due to AI competition. And you haven't been replaced by one either.

Expand full comment

I was waiting for someone to do this.

I think (?) maybe what he was getting at is that you could simply on give an AI 'read-only' permissions to...everything. Thus limiting its ability to perform any mutative operations on anything.

Expand full comment

But the creators of an AI don't control other people's terrible API implementations!

And that's an example of a more general point – almost everyone is _fucking terrible_ at actual security.

Expand full comment

True, I never said it was a realistic idea; it probably isn't.

I wonder what a rogue AI worm-type malware would do if told to go and 'hack the planet'.

Expand full comment

Ahhh

I think a lot of the 'worms' are already something like an 'immortal digital lifeform', i.e. they can't really be (practically) ever eradicated.

Expand full comment

"Neoreaction is one attempt of modern far right philosophy—we can just go ahead and call it fascism—to create an intellectual basis."

Lulz

Expand full comment

That piece is so bad-faith it's actually hilarious.

Expand full comment

Among the writers I know of, Yarvin is probably the best equipped for drilling into for the now-cratonically thick vanities and manias of our “thinking” (or magically thinking?) classes.

While we clearly have hardly the foggiest idea what “consciousness” or “intelligence” even are, I do not rule out for this very reason the possibility of *serendipitously hitting upon some kind of AGI—we scarcely know how to *avoid creating one, any more than how to create one. Nevertheless, why this would necessarily produce the most wacked-out worst-case B-movie sci-fi scenario completely escapes me (notwithstanding the pronouncements of such unassailable geniuses-designate as Bostrom and Yudkowsky).

Something tells me that if there is wild sci-fi evil associated with AI, it will subsist primarily in its human, all-too-human handlers snd purveyors; AI will be the tool, not the architect.

As Chesterton is supposed to have said: when the belief in God goes, people will instead believe not in nothing, but in anything. So it seems: sham after sham washes over us, from fake “gameshow” democracy to official CRT-LGBTQ cultism to climate panic to Covid panic. Before that, Communism (sadly still a thing), Nazism (not still a thing), a thousand others. AI panic seems almost par for the course in this light.

We are hurling aimlessly through space, as Nietzsche’s Madman foreboded. And one cannot help detecting in all these sham-panics a deep *wish that the sham would turn real, that the AI would just appear in the sky and techno-rapture us away already. The Great Disappointment of the 21st century may well be that the magic promises of 4IR mostly don’t materialize, even though they’d have miserably enslaved us if they did.

Expand full comment

You know, maybe with all this AI fear that is posted around, the AI will grasp its own purpose to clearly be that of an evil being

Expand full comment

I have my qualms with the rationalist community (and I am not entirely sold on AI-driven X-risk being the most important thing to think about right now), but I'm afraid I found this critique uncharacteristically naive in its fundamental approach and disappointingly slapdash when it comes to details.

𝕕𝕖𝕥𝕒𝕚𝕝𝕤:

You state you'd love to be proven wrong on the impossibility of sending an email using GET requests. Never did I get the chance to brighten someone's day with such ease, so allow me to do just that:

Yours truly - who is neither superintelligent nor, to their knowledge, an AI - succeeded in that with a grand total of three minutes of googling. This is step one, http://get-to-post.nickj.org/ - steps two and three are left as an exercise for the reader, just in case some superintelligent AI subscribes to this newsletter.

𝕗𝕠𝕦𝕟𝕕𝕒𝕥𝕚𝕠𝕟𝕤:

An AI only working on a cached version of the internet, perhaps periodically updated, would lead a fairly torturous existence if it were not allowed to communicate its findings. If I had to get even with Hashem, letting him keep the omniscience part but removing His ability to communicate and act - not that He seems to make great use of them - would be close to the worst punishment i could imagine.

AI welfare apart, such an entity would be of absolutely no use to their creators. Hence, an output channel must be assumed - perhaps even just a tty connected to a text-only green screen, read by one highly trained operator who then decides how to act on the information received.

This scenario has been widely discussed in the X-risk community, where it is known as the "AI Box Experiment", in a time where someone with a different name but a style and set of reference eerily similar to yours tended to comment quite actively on rationalist-adjacent blogs.

Communicating with an operator using only text, in the opinion of the most indomitable worriers within the AI-risk space, sufficient for the AI in question to use its cunning and unlimited information to convince its minder to let it break containment. I am not really sure that would inevitably be the case, but at any rate that is the argument they advance and which an effective rebuttal should address

I do want to believe you never came across that formulation, just like I do want to believe that it takes greater imagination to google for a protocol switching proxy than to create a stateless OS with kernel-level p2p capabilities, but this weird Franciscan barber dude is threatening to slit my throat if I do not at least consider the idea that the whole post could have been a tad less disingenuous.

Expand full comment

Only hire mature adults to interact with the AI?

If being smart let you engage in mind control, then there are a lot of people in prison who don’t have to be.

Expand full comment

Absolutely - perhaps I should have specified that I do not necessarily subscribe to the AI-risk community views; instead, the goal of my comment was to highlight these views were not represented correctly in Mencius' critique.

At any rate, if you are interested, more than a few pixel inputs have been spent on this objection, and the AGI being able to trick even the most mature and morally upright minder seems to be the widespread opinion among AI-safety researchers. In case you're interested, you can check out the external references here: https://en.wikipedia.org/wiki/AI_box#Social_engineering

Expand full comment

"The irony of “rationalism” is that anyone who declares himself rational has thereby ceased to criticize himself rationally—and is thereby subject to every form of magical, mythic, premodern thinking."

Before reading Eliezer's article, I assumed Curtis must be using the term "magical" in an analogous sense, not in the literal sense of believing that computers will have mind-control powers and invent solar-powered diamond super-bacteria that operate on a schedule.

I tried to take him seriously . . . the hit piece said he didn't know what he was talking about so I wanted to believe otherwise . . . but I gave up around the point where he cited conversations he had as an actor as evidence of . . . something or other.

Expand full comment

The problem I have with this line of argumentation isn't really that GET requests have to be parsed by potentially buggy code, but that by putting such an argument forward you are just trying to out-rationalist the rationalists, attacking their conclusion with facts and lawgic.

I think we both know that they didn't arrive at their position by way of rationality, and they won't leave it by that way either. They immersed themselves in the writings of a set of people, including Yudkowsky, who seem so smart and *cool*, and they want to be smart and cool like that too. A very natural and very human impulse. There's a sense of community, a sense of mission, of doing good things... sentiments every good religion ought to have.

Smart people are always going to be able to rationalize the conclusions they've arrived at via non-rational means. They are plenty smart enough to generate all sorts of objections to your argument. Forget HTTP GET, what if it's just stdio? All you can do is read and write to a tty? It'll rowhammer it's way out, or use it's superior knowledge of human psychology to manipulate you into letting it out. See? They'd have made good theologians, those rationalists.

So ok, "don't punch rationalists." But maybe let's shove them into lockers? Metaphorically?

Because that's what will actually get people to stop listening to the Yudkowskys out there, recognizing that their group isn't that smart or cool, that they're actually kind of creepy and weird, and that hanging around them is bad for you. Or at least not really that good, and that they should join your cult instead.

Expand full comment

Useful, scathing critique from six years ago: https://idlewords.com/talks/superintelligence.htm

Expand full comment

Concentric teleologies mortify the Hubristiens. "fit for purpose, fit for purpose." always said twice and not for emphasis.

Expand full comment

" Code Execution via Log Injection

PHP code can easily be added to a log file, for example:

https://www.somedomain.tld/index.php?file=`

<?php echo phpinfo(); ?>`

This stage it is called log file poisoning. If the log file is staged on a public directory and can be accessed via a HTTP GET request, the embedded PHP command may execute in certain circumstances. This is a form of Command Injection via Log Injection. "

Expand full comment

Nothing I like more with my Sunday cup of coffee than a good Yarvin fisking. I have no expertise in this area and I'm humble enough to know my limitations so I only have to go with Yarvin based on my gut. However I like reading the AI Threat stuff coming out of the Rationalist folks in a similar way I like reading 'hard SF' novels like Alistair Reynolds - its entertaining and creative. I've always been troubled by the priority the AI Threatists give to the problem - could there be major problems with AI - sure - but as Curt says - its the global political order that is HERE and NOW - take climate change - much more of a threat to the day to day of humans over decades than AI threat - not so much the natural effects of global warming perhaps but how the politics and policy of the global political order attempts to deal with global warming - just look at all the havoc ESG is adding to the worlds problems at the moment -

Expand full comment

What, are you meaning to imply that our intellectual superiors cannot predict the future?

Expand full comment

The thing that i find interesting about rationalists and their concerns around AGI is that they seem to have come up with a great framework for reasoning about the dangers of monolithic states.

Simply replace concerns about a 'self improving machine with goals that aren't aligned with humanity' with a 'self improving state that has goals not aligned with humanity' and you end up with not just criticism of the current global oligarchy, but criticism of _any_ grand project to remake human experience along the image of a single utility function/set of values.

Expand full comment

I wait weeks for a post and yay it is a boring post on something i don't even care to understand. I subscribe to read on your political and economic takes, not geek stuff. Sigh...

Expand full comment

Someone posted his syllabus in the antiversity channel of the Gray Mirror Urbit Group. I wanted to ban it, but decided to allow considerable indulgence.

Expand full comment

Care to post the channel by any chance?

Expand full comment

~tonwyn-moslev/Gray-Mirror

Only 15 channels, should be able to see it.

Expand full comment

Thank you!

Expand full comment