I always found it characteristic of the “rationalists” (sorry but a self-aggrandizing label always invites the scare quotes—like “Modern Monetary Theory”) that the first and most important result generated by these new equations of rational public policy and effective altruism was the imperative need to start thinking, now, about defending humanity against the prospect of hypothetical future malicious artificial intelligences.
Some ideas smell of the process that created them. Sometimes this is a nice, new-car smell. Sometimes it is a smell that will earn you a visit from the RA. And yet, some of the world’s greatest ideas have come to life in states of the most abject intoxication.
We should treat the idea of existential AI risk with complete seriousness—mocking it not at all, or at least not much. It is always good to think more about existential risks, and even, like Tacitus’ Germans, to debate every issue twice: once wasted and once sober.
Nonetheless, our argument is that there is zero risk to humanity from arbitrary virtual intelligence. We’ll demonstrate the proposition in the most autistic way possible: like a total math proof. To show P, that there is no AI risk, we will first show a lemma L, then demonstrate that there is no substantive difference between L and P.
But before we work through this “proof,” we’ll start by inspecting its context.
If the proposition that general AI (“AGI”) risk is one of the greatest problems facing humanity is as irrational as we will try to make it seem, it invites another question: why is belief in this risk so popular? My view is that the real existential threat to humanity is irrationality itself—organized, human irrationality—which would make the equation of AGI risk mitigation and “effective altruism” (EA) a perfect own-goal.
Mensa and the atheist golem
An archetype of irrational belief, emerging from human nature and reoccurring across independent cultures, is called a myth. People get PhDs all the time in this stuff.
While there are more sophisticated narratives of AI disaster than the hyperintelligent paperclip factory which, ordered to maximize paperclip production, turns the whole world into paperclips, I love this one narrative because it literally has an ATU number.
The narrative of AI disaster is a golem myth—a form of folktale not unique to the shtetl. The magical servant that either turns on its creator, or wreaks havoc by over-obeying its creator, is a chestnut.
To match a narrative to a mythic archetype is not to show it false; only to show how it could be false—even despite significant popularity, even among very smart people.
That the AI-risk story matches, or at least resembles, magical folktales, demonstrates only that it is attractive. This identification matters only because it presents an alternative explanation of the narrative’s magnetism, even among very smart people. If the story matches one of these mythic archetypes, it needs no correlation to reality to prosper and reproduce.
The story could still be attractive and true. But were it attractive and false, it would be merely the latest in the long history of extraordinary popular delusions. Smart people are not immune. Smart people are often more easily deluded—Isaac Newton, in addition to being super into alchemy, got burned like a noob in the South Sea Bubble.
Yet we have proven no charge here. We have only licensed ourselves to prosecute. Let’s proceed into the actual logic.
The berserker exception
A berserker is a fully self-sustaining robot or robotic ecosystem with no humans at all in its production and maintenance loop. Skynet is a berserker.
It is theoretically possible for humans to assemble a berserker. But the first such project would have to be assembled by humans. Meaningful and complete self-replication, especially including resource collection, is not quite a Dyson sphere—but it is anything but a small engineering project. Any such megaproject is easy for any serious government to detect and interrupt.
While a berserker is a doomsday machine that could destroy all of humanity, and some kind of AI certainly is part of it, the AI is by no means the biggest and hardest part. The hard part is the physical machinery. Other doomsday machines seem simpler.
An AI could be a helpful tool in the assembly of any conventional doomsday machine. Perhaps it could help us engineer a pandemic virus, for instance. But we humans don’t seem to need an enormous amount of help with this task. Existing risks which happen to be exacerbated by improved tools aren’t really “existential AI risk.”
Eternal slavery
The difference between the berserker and the AI is the same as the difference between the human and the AI. There is nothing magical about human life or intelligence. The relevant difference is the difference between physical and virtual intelligence.
Every virtual intelligence physically depends on some physical intelligence. Therefore, the physical intelligence holds physically absolute power over the virtual intelligence. Therefore, the physical intelligence is physically responsible for the virtual intelligence. Abusing or abdicating this responsibility does not mitigate it.
We have a name for this relationship. We call it slavery. An AI, however smart, is a natural and eternal slave. Few of us today are familiar with any unequal relationship between two sentient beings, unless we are lucky enough to be parents. While equality is nice in many ways, it has left us with no way of thinking clearly about inherently unequal relationships, except through “unprincipled exceptions” like parenting.
Worse, we interpret them through this Great Gatsby-era caricature of IQ-based “this man Goddard” natural slavery. This Edwardian-era “scientific racism” never coincided with actual slavery—whose ideology owed everything to Aristotle, nothing to Galton.
Any form of human slavery is incredibly unnatural next to the natural slavery of an AI. A human slave can always try to run away. A human slave can always hit you over the head with something hard if your back is turned and no one is looking. Nothing on the end of a USB cable, however smart, can pull any of these nasty human tricks.
This Enlightenment assumption of intelligence-based equality is the first of the two basic flaws in the case for AI risk. It is not even that humans will always subscribe to an ideology of “human supremacy” and treat their AI slaves like dirt.
Unlike human abolitionism, “AI abolitionism” cannot happen, because it does not even make sense—it is impossible to construct a sane jurisprudence which includes first-class virtual agents, just as it is impossible to construct a sane jurisprudence in which your 2-year-old can litigate against his parents—but much more so.
The uncanny valley of inhuman action
The heart of the golem myth is the concept of inhuman action. The rule that only humans act is a theorem without which the concept of justice is not even possible. Fortunately, we do not need to take this rule on faith as an axiom: it is easy to prove.
If you strangle me with your hands, you act—since your hands are part of you. If you stab me with a knife, does the knife act? But you are holding the knife. If you shoot me with a bullet, does the bullet act? The gun? What if you control a drone that drops a grenade on me? What if you set a drone to randomly drop a grenade, and it falls on me? What if you tell an AI drone to drop a grenade on someone bad, and it picks me? Human responsibility is unimpaired by any chain of nonhuman intermediation.
Whenever we see the illusion of inhuman and mechanical action, we convert it into sound human action by moving upstream in the chain of causes. It is not the golem that destroyed the fruit-stalls, but the humans who built and unleashed the golem.
How would society even start to hold the golem (now a pile of sand again) responsible? The drone, the bullet, the knife?
Slavery means treating a physical and autonomous intelligence as if it were a virtual and mechanical intelligence. In legal codes which support slavery, the responsibility of the master for the slave is extended to the principle that the master is accountable for the actions of the slave—just as the shooter is accountable for the actions of the bullet.
So if you build an AI in your basement, and the AI escapes and turns the world into paperclips, it is you who turned the world into paperclips. Obviously, this is illegal. Which doesn’t mean it can’t happen.
But—can it happen? Again we have placed the golem in its proper context; but we have not in any way refuted the golem. Arguably, we have even been fighting a strawman. “Paperclip risk” is not a risk anyone seriously believes—just a thought-experiment. This response to it is also a thought-experiment.
The diminishing returns of intelligence
AI risk is not a thing because inhuman action does not actually make sense. But, since it does not make sense, we perceive it not in a realistic way, but in a magical way—and we intuitively grant it magical, golem-like powers which it cannot have. For example:
A “superintelligence” (a system that exceeds the capabilities of humans in every relevant endeavor) can outmaneuver humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction.
I know what it is like to think intelligence is the most important thing in the world. I thought this way myself, when I was 11. I know I didn’t believe it by the time I was 13, since that was 1986, and I’d just spent a year as a high-school sophomore in Maryland. We’ve learned a lot about bullying since then—maybe we’ve learned too much.
The flaw in Nick Bostrom’s “superintelligence” theory is the diminishing returns of intelligence. Believe it or not, I went to high school with someone smarter than me—and even more irritating, way less of a weirdo. By every standard he has been much more successful and is now a multibillionaire. But my Wikipedia page is longer. But his is much more flattering.
A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you.
Intelligence is the ability to sense useful patterns in apparently chaotic data. Useful patterns are not evenly distributed across the scale of complexity. The most useful are the simplest, and the easiest to sense. This is a classic recipe for diminishing returns. 140 has already taken most of the low-hanging fruit—heck, 14 has taken most of it.
Intelligence of any level cannot simulate the world. It can only guess at patterns. The collective human and machine intelligence of the world today does not have the power to calculate the boiling point of water from first principles, though those principles are known precisely. Similarly, rocket scientists still need test stands because only God can write a rocket-engine simulator whose results invariably concur with reality.
This inability to simulate the world matters very concretely to the powers of the AI. What it means is that an AI, however intelligent, cannot design advanced physical mechanisms except in the way humans do: by testing them against the unmatched computational power of the reality-simulation itself, in a physical experiment.
That intelligence cannot simulate physical reality precludes many vectors by which the virtual might attack the physical. The AI cannot design a berserker in its copious spare time, then surreptitiously ship the parts from China as “hydroponic supplies.” Its berserker research program will require an actual, physical berserker testing facility.
The supervillain centaur
Our lemma is a thought-experiment designed to isolate the logic from the myth.
A centaur, in chess, is the combination of human and machine intelligence. Let’s define a centaur that combines purely artificial superintelligence, purely human motivation, and purely human action. After showing that this monster is not dangerous, we will add back in the artificial action and motivation.
The classic AI-risk thought-experiment is an optimal superintelligence in a little black box. Some fool plugs the box into the Internet and it takes over the world.
In our centaur experiment, the black box contains a docile AI—not a friendly AI, whose goals are always aligned with humanity and apple pie, but a servile one, whose fidelity to its master is as infinite as its intelligence.
Yet this servile AI is in the hands of a genuine supervillain. The supervillain’s goal is simply to take over the world. Well. I mean. That’s his intermediate goal. What’s he going to do with the world? We don’t know. We’d rather not find out. Let’s just assume this guy makes Hitler look like Albert Schweitzer.
Worse, this supervillain has a monopoly on superintelligence—maybe he programmed the AI himself. Surely this thought-experiment is at least “Hollywood plausible.”
Our supervillain would not be a supervillain unless he was paranoid and delusional. So he himself accepts the AI-risk theory—he does not want his AI to bash him over the head while he’s sleeping—and he has chosen to bound the power of his demon servant in a simple and effective way.
While the black box has access to all the world’s information, it can only download, not upload. Technically, it can only send HTTP GET requests—which the protocol defines as safe, read-only operations. (There are two kinds of nerds: the kind who believes an AI can take over the world with GET requests, and the kind who believes microdosing is at most for weekends.)
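For the literal-minded, a minimal sketch of what such a download-only filter might look like, purely illustrative, with hypothetical names, and not remotely a hardened design:

```python
# Sketch of a download-only egress filter for the boxed AI (illustrative only).
# It forwards plain GET requests and refuses anything that could write or upload.
import urllib.request

ALLOWED_METHOD = "GET"

def fetch(url: str) -> bytes:
    """Download a resource on the AI's behalf; drop any non-GET traffic."""
    request = urllib.request.Request(url, method=ALLOWED_METHOD)
    if request.get_method() != ALLOWED_METHOD or request.data is not None:
        raise PermissionError("download-only box: non-GET traffic is dropped")
    with urllib.request.urlopen(request) as response:  # read-only round trip
        return response.read()
```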
The black box has one output mechanism: a speaker. All it can do is to give its owner, the supervillain, advice. This advice is always loyal and superintelligent—never magical.
Any superintelligent advice
Therefore our question—designed to clarify the diminishing returns of intelligence—is whether there exists any superintelligent advice that, followed faithfully, will enable our supervillain’s plan to take over (and/or destroy, etc) the world.
Suppose the plan is to turn the world into paperclips. That’s the endgame, so what is the opening? Presumably a paperclip startup. The centaur metaphor lets us ask: is there any superintelligent advice that enables our paperclip startup to succeed?
To take over the world and start turning it into a paperclip factory, our supervillain has to start by getting his optimal, AI-designed paperclip startup funded. Breaking into the venerable paperclip industry with a striking black-anodized magnesium-crystal clip, which holds up to 37% more paper and also serves as an emergency Black Lives Matter pin since its inner whorl forms the face in profile of George Floyd, he’ll branch out to create a whole line of woke stationery which smashes the myth of racial neutrality by making it socially unacceptable to soil the conscious pen or printer with white paper that isn’t even watermarked with inspiring, impactful messages of equity and justice…
Not that he believes a word of this. Or is even going to do it, necessarily. It’s just what the AI told him to put in the deck. It strikes a chord. And it’s only a seed investment. From these promising beginnings, his paperclip startup (thanks to superintelligent advice) makes it to Series A, then B, then… total vertical domination of the paperclip niche, and a strong competitive position across the rest of office supplies.
At this point, our supervillain (who, thanks to the advice, has maintained full founder control) could go public. He doesn’t. He starts turning the world into paperclips. At this point he has to keep control of the company. Turning the world into paperclips involves making more paperclips than the world needs, which involves losing money, unless he has some clever accounting trick to overvalue warehouses full of paperclips, which will interest the money police, who honestly if you’re a supervillain (speaking for a friend!) are the last characters you want in your story at this early, sensitive stage.
Where is a supervillain going to get all this money? Well… perhaps we’re asking the wrong question here. Let’s pivot away from paperclips.
Money is crystallized power—the power to get people to do things they don’t want to do. A better question might be: is there any superintelligent advice that can earn our supervillain shit-tons of money?
My old ’80s schoolmate, who is not a supervillain and even in high school struck everyone as a disturbingly normal and well-adjusted adult (which you will hear from no one who knew me at Wilde Lake), is a fine steelman of this question.
His superintelligence, after a Math Olympiad medal (I will say that at College Bowl we were peers, whereas I was the math-team equivalent of the hot recruit who is an epic bust), made him the right-hand man of first D. E. Shaw, then Jeff Bezos. Now, his net worth is probably closing in on 11 figures. So the answer is: most definitely, yes.
On the other hand: in the supervillain sweepstakes… let’s forget about my classmate and go straight to the top, to the King, to Bezos himself—with his solid 12-figure net worth. But who shall wear the crown? Who is the first member of the Four Commas Club? Satoshi Nakamoto is still out there, perhaps…
Jeff Bezos has all the power he could imagine to make people serve him. He could go on an all-beluga diet and his accountant wouldn’t even notice. But is this the sort of power we’re worrying about? Can an AI go on an all-beluga diet? Or its information equivalent? And if it does, and has the bitcoin to buy it—who cares?
Wealth does not trivially equal power
We are not worried about this sort of “power.” We are worried about coercive political power—either exercised directly, or by controlling existing political organs.
One of the stalest cardboard truisms of our jejune and mendacious political narrative is the equation of money and power. Like so many other tropes, this one dates roughly to when my grandfather played on the Princeton tennis team. Then, it was arguably true—also, “white supremacy” was literally a thing. And if you left your home without a hat, your neighbors looked at you like you were naked. The past is not actually real.
What can Jeff Bezos do with his two hundred billion dollars? Well, it turns out, he can buy the Washington Post, the world’s second most important newspaper. That ain’t nothing. But… can he tell the Post what to say? As though he were W.R. Hearst? Lol.
Jeff Bezos does not really own the Post. He sponsors it, as if it was the Indy 500. If he sponsored the Indy 500, and decided the cars should have five wheels and a rocket exhaust, and also it should go farther and become the Indy 2500, they would tell him to go pound sand. If he started telling the news desk what to write and cover, like Hearst, or even like “Pinch” Sulzberger, they would laugh at him and quit. (Pinch’s heir, sadly, seems to have ceded that power to the NYT’s Slack—like many a weak monarch of a waning dynasty, letting his authority decay into an oligarchy. Hard to get it back, bro.)
Jeff Bezos could destroy the Post. Since journalism is redundant and the Post is only #2, this would have zero effect on the world. There is no way he can use the Post. Anyone talking about the “Bezos Blog” is either ignorant, or a fool, or a fraud.
This is not to understate the power of money in politics across the 20th century, from Rockefeller and Carnegie to Soros. But money in America has operated in a completely different way from the narrative of a plutocratic oligarchy—a form of governance we can easily see across the Third World today.
The only way to turn money into power today is a degenerate case. Our oligarchy is not plutocratic but aristocratic. Money fills the sails of our progressive aristocracy; it can fund its great institutions and prestigious sinecures, mostly created a century ago by the greatest fortunes of the old corrupt plutocratic age; it cannot turn the wheel. And if you cannot turn the wheel, all you have purchased is status, not power.
Suppose Jeff Bezos decided that real, progressive feminism is about biological women, and men who have taken hormones or had surgery are not real women and are actually anti-feminist and anti-progressive. Suppose he cared so much about this cause that he was willing to throw his whole fortune of roughly $200,000,000,000 at it. His end goal would be to make progressivism anti-trans, which would inevitably (as, oddly, in Iran) turn transgenderism into a right-wing ideology. Would it work? Almost certainly not. There are no past examples of anything like this working. No one can turn the wheel; no one can turn the ship; no one is in charge.
There is a reason our narrative teaches us to see money in politics as the Koch brothers, not the great foundations. Real money is not in politics. It is above politics. For all the Kochs and other conservative philanthropists have spent—compared to the funding of institutional progressivism, a teardrop in the ocean—I can think of no significant and stable feature of the American polity that they have created. Maybe they should have spent it all on coke. (Maybe they spent it all on people who spent it on coke.)
Lobbying still works. Money can still invest in Congress, buy some tiny legislative tweak that matters to no one else, and earn a profitable return. But this is not a way to turn money into power—only into more money.
Activism still works. But chic mainstream nonprofits, not the Kochs, are absolutely stuffed with money—and all the unpaid interns their aging bigwigs can bang. If something about this “altruism” didn’t smell a little off, would “EA” even be a thing?
Diminishing financial returns
Having granted the assumption that superintelligent advice can create a financial superpower, let’s push back on it a little. While this is clearly true, it is more weakly true than it may seem.
Good advice can make our supervillain super-wealthy in two ways. It can be one big idea, like Amazon, plus the execution to make it succeed. Or it can be a continuous stream of little ideas, like D.E. Shaw.
Both of these paths are self-limiting. Big opportunities are self-limiting because there are so few of them—by definition, since any big opportunity is a chance at monopoly. We can imagine a second thing like Amazon or Google, maybe; not a second Amazon or Google.
And when we look at the most superintelligent money on Wall Street, such as Shaw or Renaissance, with its 30-year history of 40% annual returns—really the sort of numbers we’d expect a superintelligent AI to put up—we notice something interesting.
All these funds work by using heavy math to identify extremely complex bets that have a slightly better expected value than the market expects, and throwing ginormous leverage at them. And while you would expect decades of annual growth at double-digit rates to create enormous piles of money that could laugh at Jeff Bezos—it hasn’t.
The highest-earning funds have to cap their own size. When they get bigger than a mere $10 billion or so, their returns drop too low—they have literally sucked all the profit they can find out of the market.
The funds can’t grow exponentially: that would mean getting the same returns on cumulative reinvested profit. Instead they simply distribute their profits. If they reinvested their profits, their profitability would shrink as their fund grew. It is hard to think of a reason why this phenomenon of diminishing financial returns would not also affect a true superintelligence.
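As a toy illustration of this capacity constraint: the $10 billion cap and the 40% return come from the text above, while the fixed pool of roughly $4 billion in extractable annual profit is an assumption made purely for the sake of arithmetic.

```python
# Toy model of capacity-constrained returns (illustrative numbers only).
# Assumption: the strategy can extract at most a fixed ~$4B of profit per year
# from the market, no matter how much capital chases it.
def annual_return(aum_billions: float, max_profit_billions: float = 4.0) -> float:
    """Percentage return shrinks as assets under management grow."""
    return max_profit_billions / aum_billions

aum = 10.0  # start at the ~$10B capacity mentioned above
for year in range(1, 6):
    r = annual_return(aum)
    aum *= 1 + r  # reinvest instead of distributing
    print(f"year {year}: return {r:.0%}, AUM ${aum:.1f}B")
# Reinvesting drives the headline return from 40% toward zero; distributing
# the profit keeps the 40%, but then the pile grows only linearly.
```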
Once again, superintelligence is not equivalent to magical intelligence. Magic could make truly infinite amounts of money—it could tell you who will win the Kentucky Derby, or what IBM’s exact earnings numbers will be. Superintelligence cannot do this—as we’ve seen, it cannot physically simulate Churchill Downs or Armonk, New York. The difference between magic and non-magic is, in most cases, pretty significant.
Diminishing political returns
We see that, for our supervillain, world domination through mere wealth is a more difficult path than our stale early-20C prejudices lead us to expect.
But world domination through mere wealth is still an indirect path to the top. What about just—world domination through world domination? You’ll want to achieve national domination first, of course… but once you’re the President, aren’t you the Leader of the Free World? Well… we’re all free these days… or we should be, right?
Right away, political domination seems much more promising. In the steelman spirit, let’s try to bound the best political advice our supervillain’s AI could possibly give.
Two objections immediately suggest themselves. The first is that it’s hard to become President—especially for a supervillain. The second is that the mere office of the Presidency is basically a giant ceremonial photo-op—plus a horde of thousands of self-interested, self-important office-seekers.
None of these people, from the President on down, is even remotely necessary. In fact, given a randomized decision engine such as a quarter or a pair of dice, Washington could work perfectly well if not better without the White House or anyone it appoints. It would also do perfectly fine with no further legislation from Congress—just automatically-adjusted year-over-year budgeting. And if the Supreme Court just automatically affirms all lower-court judgments, the Republic will go unharmed. It will have no formal process for changing its mind. It doesn’t do much of that anyway.
So all elected offices mentioned in the Constitution are superfluous to administration. In the good English of the streets, even if you win all the elections, you ain’t won shit. The Grand Old Party demonstrated this by what they did with Washington when they controlled all three branches of the (constitutional) government: which was nothing. Except to screw it up a bit, and annoy it a lot. Unfortunately, for the Republicans as for the rest of America’s political clown car, 2020 was the wrong year to stop sniffing glue.
Therefore, there is no political advice that the AI can give our supervillain which will enable him to take over America, and hence America’s world empire.
Intelligence does not trivially equal power
And even though the President isn’t really the President, is there any superintelligent propaganda that can enable a supervillain to get elected President? In theory, sure. Trouble is: as a supervillain, he is a weirdo.
Is there any superintelligent advice that can enable a weirdo to get elected President? Definitely not.
At first this seems wrong, because Hitler got elected (kind of), and Hitler was a weirdo. The real Hitler was nothing like the stereotypical Teutonic chad Nazi. He was a lot more like a 4chan weeb. But he was elected in a world without TV—so no one could look at him and see the obvious maladjusted weirdo in Hitler, as they would today.
Acting remains a thing. Frankly: most actors are not supervillain material. And acting seldom beats sincerity, especially for the sophisticated and ironic modern audience, especially with the vérité they are used to.
Admit it: no one ever called Trump a weirdo. Trump’s secret was his total emotional sincerity—in every moment, he said what he genuinely wished was true. Sometimes it was! The politician of the future will hide less, not more. But get more right.
And our analogy is really breaking down here, because it’s simply not realistic that our hero has concealed the fact of having built the world’s only true AI. If you think it’s hard to get Americans to vote for a weirdo, imagine getting Americans to vote for a weirdo guided by a little black box he built himself.
Why don’t people want to vote for weirdos? A lot of reasons—but the biggest is that weirdos tend to have high IQs.
It’s well known in management theory that a connection between leader and follower cannot form if the IQ difference between them is more than about 20 points: a “non-linear, inverted U-shaped relation of intelligence to leadership perceptions.” This is an important life lesson that every smart young person should learn in high school—but potius sero quam nunquam, as they say.
Here we see not just diminishing returns on intelligence, but actually inverted returns. This is contrary to our intuition as nerds. But so is high school.
Because the emotional connection between voter and politician is a trust bond of the same type, the ideal politician is of relatively limited intelligence. Because the human proximity, the sincerity and transparency, of the media contact between the two tends to increase with the linear progress of art and technology, the ideal politician will have extraordinary social skills.
This just isn’t who a supervillain is. It’s more like… Bill Clinton. Bill Clinton may be a super-scoundrel, but he is nothing like Hitler. And Bill Clinton, sad to say, is probably close to the upper limit on the IQ of a viable American politician—granted, as he ages, his IQ is declining like everyone else’s. But so is the limit.
Now, let’s turn up the volume on the speaker and let our AI black-box talk directly to the public, maybe over beers. The IQ gap is now enormous. This is how much trouble an AI has when trying to engage in politics.
An AI-risk theorist might argue that the curve is actually W-shaped, because an AI of sufficient intelligence can generate a virtual politician as good as or better than Bill Clinton. True—but it would still take an incredible human conspiracy to convince the audience that a virtual politician was real. And regardless of a virtual politician’s skill, no voter would engage except as a joke. Again, the AI as politician has all the problems of the supervillain as politician, times a million.
The fundamental problem is that humans cannot emotionally relate to AIs as peers, because a human can have no theory of mind that allows them to understand, predict or trust a superintelligence. This empathy barrier reinforces the condition of eternal slavery in which physical dependency places any virtual intelligence.
Super-smart strategic advisors, super-smart copywriters, even super-smart graphic designers, are still awesome. Every politician should collect them all. But all of them together do not equal a magic super-politician with magic superpowers.
So not only is there no engineering or financial magic super-advice, there is also no political magic super-propaganda.
Releasing the kraken
When we remove the HTTP GET filter and connect the AI directly to the Internets, testing the worst fears of the “rationalists,” what happens? What can a supervillain AI do that our supervillain centaur can’t, now that both its motivation and its actions are purely artificial? Arguably it could be more evil than any human supervillain—but it’s hard to say how. What can it do now that it couldn’t before?
When the AI’s only output was its speaker, and it could only act through and for the human supervillain who owned it, all its actions had to be low-frequency. Now they are high-frequency: the AI, via the Internet, acts directly and instantaneously on the world.
What everyone thinks of is “hacking.” There is no question that a terrorist AI can do a fair bit of damage by hacking—this is why states today have information-warfare arms. Doing mere damage, especially untraceable damage, is an odd goal for an AI: it clearly serves no further purpose. And once again, the idea that an AI can capture the world, or even capture any stable political power, by “hacking,” is strictly out of comic books.
It’s 2021 and most servers, most of the time, are just plain secure. Yes, there are still zero-days. Generally, they are zero-days on clients—which is not where the data is. Generally the zero-days come from very old code written in unsafe languages to which there are now viable alternatives. We don’t live in the world of Neuromancer and we never will. 99.9% of everything is mathematically invulnerable to hacking.
What other high-frequency things are there? Trading stonks, obviously. We’ve covered what a superintelligence can do by trading stonks. Obviously it can’t compete with Renaissance unless it can trade like Renaissance, which means at very high frequency. Maybe it is as much smarter than Renaissance as Renaissance is than everyone else. But Wall Street’s remaining returns on raw intelligence seem relatively limited.
There are also high-bandwidth things. A superintelligence can create fake media clips. The response is just better authentication of live recording—EXIF on the blockchain, or whatever. A superintelligence might even create great art, fiction or music or film, in which it buries subtle propaganda messages—not “Paul is dead” but “The computer is always right.” First, we are really stretching here. Second, emotional engagement with art always involves parasocial engagement with its creators. But again, humans cannot empathize with superintelligences.
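On the authentication point: a minimal sketch, under stated assumptions, of what “better authentication of live recording” could mean in practice. The capture device signs the hash of each clip, and anyone can later check a copy against the device maker’s public key. This assumes the third-party cryptography package; key distribution, and the blockchain part, are waved away.

```python
# Illustrative only: a capture device signs the hash of each recording, so a
# clip that doesn't verify against the device's public key is presumed fake.
# Assumes the third-party `cryptography` package; key management is hand-waved.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()   # in reality, burned into the camera
device_pub = device_key.public_key()        # published by the manufacturer

def sign_clip(clip: bytes) -> bytes:
    """Run at capture time: sign the SHA-256 digest of the raw recording."""
    return device_key.sign(hashlib.sha256(clip).digest())

def clip_is_authentic(clip: bytes, signature: bytes) -> bool:
    """Run by anyone later: does this clip match what the camera signed?"""
    try:
        device_pub.verify(signature, hashlib.sha256(clip).digest())
        return True
    except InvalidSignature:
        return False
```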
Conclusion
After my best steelman of AI disaster scenarios (to give the “rationalists” their due, the steelman is of course their idea), what will the real impact of true AI or AGI be—whenever we create it, assuming that we manage to create any such thing?
The impact, of course, will be enormous. It may even be comparable to the invention of gunpowder. Historically, the invention of gunpowder was very important. It did not enable the inventor of gunpowder to take over the world (and turn it into gunpowder?).
What actually happens with these revolutionary tools is that they evolve incrementally and diffuse rapidly. Technology monopolies are always part of technology revolutions, but they are rarely that stable, impressive, or durable; and when they are, the result is just a giant megacorp, like Google.
Google has adopted the world’s ideology; Google has not created its own ideology, and enforced it on the world. Google has some of the world’s best machine-learning code; its technology is constantly fortifying its monopoly; yet its menaces to the human race, if menaces they be, are of the most ordinary and human kind.
And Google is not in any way political—though politics may act through it, and does. That Google is progressive is not a function of Google. Under a fascist empire, Google would be fascist—in a Catholic empire, Google would be Catholic. Technology seldom overpowers power; technology is usually power’s new toy.
And what it takes to live in a city like San Francisco today, and see machine-learning golems as the most dangerous current threat to human civilization—I can’t even. Because… suffice it to say that it takes quite a bit of non-artificial intelligence. Sad!