68 Comments

From the CA article

"To engage in Alt-Right thinking is to turn oneself into a vacuous skinsuit animated by raw stupidity. There is literally not a single shred of non-stupidity in the entire thing. Mencius Moldbug, stupid."

it's pretty impressive that your blog posts can make these people need to have group therapy sessions (circling, maybe?) and then just twitch and writhe in pure seething anger. the only evidence they could level against your cathedral hypothesis is that Elon is buying Twitter. lmao. Twitter's not a real place

Expand full comment

I do mostly agree with you, but I just have a couple of comments. Firstly, while in theory GET requests are read-only, there are many systems out there on the internet that allow GET requests to perform non-trivial work that is not idempotent. Yes, this is really awful, and they should probably be POST requests. But they aren't. I'm sure that with a bit of searching you could find some awfully designed web server somewhere that will send emails based on GET requests.

Secondly, and a little more abstractly, perhaps we're already being manipulated by AI? There's much said about "bots" in places like Twitter and Facebook: people being manipulated by procedurally generated "fake news" articles, millions of tweets being made by bots, etc. Given that these bots are already manipulating normies into believing total nonsense, is it not conceivable that some hypothetical general AI could do an even better job of manipulating people into doings its "bidding"? It might not be "the AI will send an email asking for a particular mixture of proteins" but more along the lines of "the AI will manipulate society through social media into funding nanotechnology research 100x what it receives now" until such a time that it _can_ just send an email asking for a particular mixture of proteins.

Personally, I think that it's all exceedingly unlikely. It would require a general artificial intelligence with planning and strategic facilities orders of magnitude better than we have now. As you say, current "AI" can't even remember the beginning of a sentence by the time it gets to the end of it, or multiply 3-digit numbers. However, nor can most humans, probably, without pen and paper, and it doesn't seem to stop us going about our daily lives. If we need to multiply big numbers we can delegate it to simpler machines. The biggest problem with current "AI"/deep learning research is that it is very bad at certain things that traditional programming finds trivial, like remembering things and consulting its memory, sorting, maths, etc. So it stands to reason that a lot of current AI research is going into trying to work out how to marry "AI" with fixed function units that can do stuff like multiply, or consult a database, where necessary: AI that can delegate to fixed function units might be that leap that lets "AI" become much more powerful... But I'm still skeptical.

Expand full comment

>Good luck sending one email with GET requests

app.get('/myevilai/sendemail/', sendEmail);

<...>

function sendEmail(req, res) {

let {subject, address, body} = req.query;

emailClient.send(subject, address, body);

res.status(200).end();

}

Of course this "implementation" uses a GET when it should be really using a POST, but I've seen worse.

In other words: I am totally not getting this part of your argumentation in this post; and neither did I get it in the original one. But the Golem myth metaphor rocks.

Expand full comment

"Neoreaction is one attempt of modern far right philosophyโ€”we can just go ahead and call it fascismโ€”to create an intellectual basis."

Lulz

Expand full comment

Among the writers I know of, Yarvin is probably the best equipped for drilling into for the now-cratonically thick vanities and manias of our โ€œthinkingโ€ (or magically thinking?) classes.

While we clearly have hardly the foggiest idea what โ€œconsciousnessโ€ or โ€œintelligenceโ€ even are, I do not rule out for this very reason the possibility of *serendipitously hitting upon some kind of AGIโ€”we scarcely know how to *avoid creating one, any more than how to create one. Nevertheless, why this would necessarily produce the most wacked-out worst-case B-movie sci-fi scenario completely escapes me (notwithstanding the pronouncements of such unassailable geniuses-designate as Bostrom and Yudkowsky).

Something tells me that if there is wild sci-fi evil associated with AI, it will subsist primarily in its human, all-too-human handlers snd purveyors; AI will be the tool, not the architect.

As Chesterton is supposed to have said: when the belief in God goes, people will instead believe not in nothing, but in anything. So it seems: sham after sham washes over us, from fake โ€œgameshowโ€ democracy to official CRT-LGBTQ cultism to climate panic to Covid panic. Before that, Communism (sadly still a thing), Nazism (not still a thing), a thousand others. AI panic seems almost par for the course in this light.

We are hurling aimlessly through space, as Nietzscheโ€™s Madman foreboded. And one cannot help detecting in all these sham-panics a deep *wish that the sham would turn real, that the AI would just appear in the sky and techno-rapture us away already. The Great Disappointment of the 21st century may well be that the magic promises of 4IR mostly donโ€™t materialize, even though theyโ€™d have miserably enslaved us if they did.

Expand full comment

I have my qualms with the rationalist community (and I am not entirely sold on AI-driven X-risk being the most important thing to think about right now), but I'm afraid I found this critique uncharacteristically naive in its fundamental approach and disappointingly slapdash when it comes to details.

๐••๐•–๐•ฅ๐•’๐•š๐•๐•ค:

You state you'd love to be proven wrong on the impossibility of sending an email using GET requests. Never did I get the chance to brighten someone's day with such ease, so allow me to do just that:

Yours truly - who is neither superintelligent nor, to their knowledge, an AI - succeeded in that with a grand total of three minutes of googling. This is step one, http://get-to-post.nickj.org/ - steps two and three are left as an exercise for the reader, just in case some superintelligent AI subscribes to this newsletter.

๐•—๐• ๐•ฆ๐•Ÿ๐••๐•’๐•ฅ๐•š๐• ๐•Ÿ๐•ค:

An AI only working on a cached version of the internet, perhaps periodically updated, would lead a fairly torturous existence if it were not allowed to communicate its findings. If I had to get even with Hashem, letting him keep the omniscience part but removing His ability to communicate and act - not that He seems to make great use of them - would be close to the worst punishment i could imagine.

AI welfare apart, such an entity would be of absolutely no use to their creators. Hence, an output channel must be assumed - perhaps even just a tty connected to a text-only green screen, read by one highly trained operator who then decides how to act on the information received.

This scenario has been widely discussed in the X-risk community, where it is known as the "AI Box Experiment", in a time where someone with a different name but a style and set of reference eerily similar to yours tended to comment quite actively on rationalist-adjacent blogs.

Communicating with an operator using only text, in the opinion of the most indomitable worriers within the AI-risk space, sufficient for the AI in question to use its cunning and unlimited information to convince its minder to let it break containment. I am not really sure that would inevitably be the case, but at any rate that is the argument they advance and which an effective rebuttal should address

I do want to believe you never came across that formulation, just like I do want to believe that it takes greater imagination to google for a protocol switching proxy than to create a stateless OS with kernel-level p2p capabilities, but this weird Franciscan barber dude is threatening to slit my throat if I do not at least consider the idea that the whole post could have been a tad less disingenuous.

Expand full comment

"The irony of โ€œrationalismโ€ is that anyone who declares himself rational has thereby ceased to criticize himself rationallyโ€”and is thereby subject to every form of magical, mythic, premodern thinking."

Before reading Eliezer's article, I assumed Curtis must be using the term "magical" in an analogous sense, not in the literal sense of believing that computers will have mind-control powers and invent solar-powered diamond super-bacteria that operate on a schedule.

I tried to take him seriously . . . the hit piece said he didn't know what he was talking about so I wanted to believe otherwise . . . but I gave up around the point where he cited conversations he had as an actor as evidence of . . . something or other.

Expand full comment

The problem I have with this line of argumentation isn't really that GET requests have to be parsed by potentially buggy code, but that by putting such an argument forward you are just trying to out-rationalist the rationalists, attacking their conclusion with facts and lawgic.

I think we both know that they didn't arrive at their position by way of rationality, and they won't leave it by that way either. They immersed themselves in the writings of a set of people, including Yudkowsky, who seem so smart and *cool*, and they want to be smart and cool like that too. A very natural and very human impulse. There's a sense of community, a sense of mission, of doing good things... sentiments every good religion ought to have.

Smart people are always going to be able to rationalize the conclusions they've arrived at via non-rational means. They are plenty smart enough to generate all sorts of objections to your argument. Forget HTTP GET, what if it's just stdio? All you can do is read and write to a tty? It'll rowhammer it's way out, or use it's superior knowledge of human psychology to manipulate you into letting it out. See? They'd have made good theologians, those rationalists.

So ok, "don't punch rationalists." But maybe let's shove them into lockers? Metaphorically?

Because that's what will actually get people to stop listening to the Yudkowskys out there, recognizing that their group isn't that smart or cool, that they're actually kind of creepy and weird, and that hanging around them is bad for you. Or at least not really that good, and that they should join your cult instead.

Expand full comment

Useful, scathing critique from six years ago: https://idlewords.com/talks/superintelligence.htm

Expand full comment

Concentric teleologies mortify the Hubristiens. "fit for purpose, fit for purpose." always said twice and not for emphasis.

Expand full comment

" Code Execution via Log Injection

PHP code can easily be added to a log file, for example:

https://www.somedomain.tld/index.php?file=`

<?php echo phpinfo(); ?>`

This stage it is called log file poisoning. If the log file is staged on a public directory and can be accessed via a HTTP GET request, the embedded PHP command may execute in certain circumstances. This is a form of Command Injection via Log Injection. "

Expand full comment

Nothing I like more with my Sunday cup of coffee than a good Yarvin fisking. I have no expertise in this area and I'm humble enough to know my limitations so I only have to go with Yarvin based on my gut. However I like reading the AI Threat stuff coming out of the Rationalist folks in a similar way I like reading 'hard SF' novels like Alistair Reynolds - its entertaining and creative. I've always been troubled by the priority the AI Threatists give to the problem - could there be major problems with AI - sure - but as Curt says - its the global political order that is HERE and NOW - take climate change - much more of a threat to the day to day of humans over decades than AI threat - not so much the natural effects of global warming perhaps but how the politics and policy of the global political order attempts to deal with global warming - just look at all the havoc ESG is adding to the worlds problems at the moment -

Expand full comment

What, are you meaning to imply that our intellectual superiors cannot predict the future?

Expand full comment

The thing that i find interesting about rationalists and their concerns around AGI is that they seem to have come up with a great framework for reasoning about the dangers of monolithic states.

Simply replace concerns about a 'self improving machine with goals that aren't aligned with humanity' with a 'self improving state that has goals not aligned with humanity' and you end up with not just criticism of the current global oligarchy, but criticism of _any_ grand project to remake human experience along the image of a single utility function/set of values.

Expand full comment

I wait weeks for a post and yay it is a boring post on something i don't even care to understand. I subscribe to read on your political and economic takes, not geek stuff. Sigh...

Expand full comment

Someone posted his syllabus in the antiversity channel of the Gray Mirror Urbit Group. I wanted to ban it, but decided to allow considerable indulgence.

Expand full comment