60 Comments

Meant to leave this on your previous post but I might as well leave it here:

As a recovering rationalist, I have a particular disdain for utilitarianism. Put simply: utilitarianism is a scam used by the person who decrees what the utility function is, to silence dissent towards their leadership.

Think about it. At the end of the day, what is utilitarianism? Utilitarianism is a philosophy that says "things with more utility are better than things with less utility". Ok, sure, fine, that's cool. What is utility? Utility is good things that make people happy, for some philosophical definition of happy.

Ok, sure, fine, that's cool. Who decides what that definition is? When two people disagree, who decides which one is right? In a normal conflict of values, the decision function is "power", as you've written about at length. In a utilitarian context? Well, in my experience, it mostly looks like the same kind of social mind games that the woke pull. And if utilitarian ethics reduces to 'social status manipulation games' whenever there's a conflict, then how is that not just tyranny with extra steps?

You've actually hit on one of my favourite examples.

> Personally, to save one of my own children, I might very well condemn all of Central Asia to death by trolley. To save them both I could throw in Brazil. I must be a very ineffective altruist, I’m afraid.

Every instance of utilitarianism I've ever seen seems to value every human life equally and interchangeably. Utilitarianism seems to always say that one life in my neighbourhood is exactly equal to one life off in some far-flung jungle. Oddly, I can't seem to get to that point from "more utility is better", and yet somehow they always do.

Here's my proposition: human lives are not equal, and there is more utility in saving lives that matter more than in saving lives that matter less. Not only are human lives not equal, but human lives don't even have an objective, concrete, singular value. The value of a human life depends on the social distance from the evaluator. My family's lives are more valuable than other people's lives, _to me_. This can be true even while recognizing at the same time that their lives are _not_ more valuable to other people.

"True" Utilitarianism would be agnostic on the question of whether or not my particular metric for utility is correct. It would simply say, taking my metric as an axiom, it is good to maximize utility. And yet, for some reason, if you were to, for example, go to an EA meeting and say some variant of "we all live in America. Fuck Africa. Who gives a shit if bushmen die of malaria. One American is worth a hundred of them, and so the greatest utility is to stop wasting money over there and spend it over here"... try it and let me know how it goes.

There is no principled, objective, _utilitarian_ way to make the judgement that my utility function is "wrong" but the "maximize lives saved" function is "right". None. It doesn't exist. But utilitarians will pretend it does exist, and invoke 'utility' to silence all viewpoints to the contrary. Ergo, utilitarianism is just a scam used by whoever arbitrarily decided what utility function we're using, to silence anyone who thinks a different function would be better.
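To make the disagreement concrete, here is a minimal sketch - purely hypothetical names and weights, nothing more - of two functions that both "maximize utility" and still rank the same rescue differently:

```python
# A minimal sketch with made-up names and weights: both rules below "maximize
# utility"; they just start from different axioms about whose utility counts
# for how much.

def equal_weight_utility(lives_saved):
    # "Every life counts the same": utility is just the head count.
    return float(len(lives_saved))

def distance_weighted_utility(lives_saved, social_distance):
    # Utility falls off with social distance from the evaluator.
    return sum(1.0 / (1.0 + social_distance[person]) for person in lives_saved)

# Hypothetical choice: save one of my children, or save three distant strangers.
social_distance = {"my_child": 0.0, "stranger_a": 9.0, "stranger_b": 9.0, "stranger_c": 9.0}
options = {"family": ["my_child"], "strangers": ["stranger_a", "stranger_b", "stranger_c"]}

for name, saved in options.items():
    print(name,
          equal_weight_utility(saved),                        # family: 1.0, strangers: 3.0
          distance_weighted_utility(saved, social_distance))  # family: 1.0, strangers: 0.3
```

Equal weighting picks the strangers; distance weighting picks my child. Neither answer falls out of "more utility is better" on its own.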


"The math is definitely telling us, I feel, to own the libs." I LOLed.


So much passion in the replies here!

And more than a little defensiveness.

EA makes a lot of people uncomfortable because it highlights just how little serious thought they've given to (1) the ethical implications of other sentient lives being sentient, and (2) future/potential sentient lives being ethically important, and vastly outnumbering present lives.

I'm well aware that we're evolved to treat 99.9999+% of other sentient beings as non-sentient and unworthy of moral concern. I've been teaching evolutionary psychology since 1990. Parenting, nepotism, tribalism, and anthropocentrism run deep, obviously, and for good adaptive reasons. They're Lindy. Time-tested and battle-proven. But that doesn't mean they're ethical in any principled or aspirational way.

If natural = good, then modern leftist woke identity politics is also good, because runaway virtue-signaling, self-righteous moral panics, and performative sentimental collectivism are based on moral instincts that also run deep. But these moral instincts aren't always good. They're often just short-sighted, self-serving, and idiotic. The naturalistic fallacy can't adjudicate what kinds of morals and political aims are worth adopting, now, given modern technological civilizations and Darwinian self-awareness.

In case anybody is curious to learn a little more about Effective Altruism as it's actually practiced -- rather than some of the straw man portrayals here -- the course syllabus for my 'Psychology of Effective Altruism' course is here, including an extensive reading list and links to some good videos: https://www.primalpoly.com/s/syllabus-EA-2020-spring-final.docx


"EAs understand that most people are not, and may never be, capable of rational utilitarian reasoning."

I am sorry, but these people are retarded.

There is nothing "irrational" about caring vastly more about those close to you than about other people. In fact, such behaviour - prioritizing the well-being of those who are most closely related to us - is profoundly rational from a genetic and evolutionary point of view. Have these people read Dawkins, at least? Caring about unrelated people as much as about related people ends up with your genes being bred out of the gene pool. Which is why we have billions and billions of people alive with obscenely xenophobic and tribalistic behaviour, and very, very few nerds with effectively altruistic behaviour. These people are not even meant to be; they are nature's accidents, because nature abhors such blind universalist altruism.

You should make a post "Effective Altruism considered retarded" like you did a decade ago about Futarchy.

But on the general topic of long-term philosophy - it is not about ethics. Every single one of the little sects mentioned by Mr. Miller (NeoReactionaries. Traditionalists. Monarchists. Nationalists. Religious pronatalists. Scholars. Crypto dudes. Ornery post-libertarians. Post-Rationalists.) does indeed have a long-term vision. They ALL believe they want to maximize long-term value. Even the libs, as you mention, Curtis. And yet they disagree so much about what that long-term vision looks like. Why? Aren't they all trying to maximize "human value"?

When you drill deep enough into ethics, you realize it is all about aesthetics. All of these little sects simply have their own different views on what they consider "beautiful", what they consider a "good life", and of course "human value". It is all a matter of aesthetic preference. This is not aesthetics in the narrow craft sense of what music genre or architectural style you like. This is "broad" aesthetics - the aesthetics of human life, the aesthetics of being.

Crypto-anarchists have one view on life aesthetics. Communists have the near opposite, white nationalists are vastly different from both, and the libs are very different in their life aesthetics from all of them combined.

The history of human civilizations is different peoples taking turns enforcing their own views of aesthetics on one another.


(Of course, every authentically great religious leader has agreed with me at heart. Christ’s Sermon on the Mount is pure agreement with me. If you don’t agree with me, you'll hate Christianity. And, yes, there's an active sub-sub-culture of Christians who agree with me.)


Engaging with "EA" and adjacent claptrap as if they were stand-alone ideas (like, say, a purported proof of the Poincaré conjecture) is a catastrophic mistake.

Political ideas do not stand alone. More than anything it matters *who* is pushing something. And it isn't any kind of secret that "EA", "rationalism", etc. are effectively a rerun of the Comintern -- with SV oligarchs (and their pet Yudkowskys) in the role of the Party.

Biting a hook is a bad move for a fish, observe, *not* because of any objective defect of *the hook*.

EAism provides the familiar Christian toolkit for committing atrocities with a clear conscience (with "moral imperative", even), with ready justifications for arbitrarily dystopian reichs - as well as stoking the narcissistic "we", with a rich selection of "we must..."s satisfying to tenured professors and redditards alike.

But this is simply a description of the mechanisms whereby the ethics-flavoured rootkit wrecks your brain. More interesting is to look at *who* is trying to install it in your head; and the *why* quickly becomes apparent.

Categorical rejection of ideas on account of their source is not, as the "enlightened" sociopaths want you to believe, an act of folly. Rather it is your first and often only line of defense against a caste of parasites who see your mind as their plaything, your will and personal sense of right and wrong as an obstacle, and (openly!) equate you with a broiler chicken.


The main reason I've always been skeptical of EA is that I haven't seen them attacking leftism.

All kinds of lefties have caused huge problems all over the world. From the old hardcore worker commies to the current woke libtards, they all create misery and ruin societies.

The root cause for a lot of suffering in the world is leftism. That's why reducing its impact should be one of the main priorities of effective altruism. But for some reason, it's not...


"The first is underestimating how much attention the EA subculture already gives to these psychological issues—the mismatch between human nature as it is, and how an ‘ideal utilitarian’ would ideally act. (...) We have no illusions that people are naturally equipped to be good at utilitarian reasoning or EA. We don't even think that most people could ever get very good at it."

And still, this manages to miss the issue, even while it explicitly states it (albeit reversed): it's not humans who aren't good at utilitarian reasoning; it's rather those good at utilitarian reasoning who aren't good at "human".

Sure, EA advocates are humans in the sense that they're like you and me (especially the nerdier of us), in that they have rights, in that if you prick them they will bleed, and so on. Some would probably even ace a Voight-Kampff. But they aren't quite there in several ways that matter.

They're part of a subset of people - myself included - that struggle to human (what with autism, being a nerd, late-stage capitalism, and so on). The difference is that EAs, along with a number of other factions, consider this an advantage, and make a project out of it. Kind of like incels making a project out of being unable to get laid.

The problem is that by not getting humans in general, they also don't get what humans want, what makes humans happier (heck, not even what makes them mentally healthy). They surely don't get what makes humans tick, and they can barely understand societies (at best they can analyze them and make some abstract maps, which they mistake for the territory).

EA recipes are as if an alien race with a totally different civilization, habits, priorities, and values came to teach humans what to do "to be happy". It doesn't matter if they're more intelligent. It doesn't even matter if their recipes can work in a human context, or at least be enforced.

Let's even say they're better than "normies". They're, say, to most people, what a Vulcan is to a human. Still, Vulcans teaching humans what they should do and how to be happy is a recipe for disaster, one that only a nerd with delusions of grandeur and/or revenge fantasies would consider viable. Vulcans don't get humans. Humans get humans.

The best that Vulcans - a minority - can do is try to learn to human in a human society. Humans don't want and don't need their advice (nor will they listen to it). Kind of like how the sighted don't need a blind man's advice on how to paint - or even on what color their house should be.


Contra Geoffrey on EA:

There is a general understanding in our ever-so-scientific culture that objectivity equals good and subjectivity equals bad. You yourself have written plenty about “scientific public policy”, the Cathedral, etc. I feel that this age sorely needs some Kierkegaardian subjectivity, the realization that objective relations are for objects and subjective relations are for subjects. Objectively speaking, one wife is less than ten tribesmen—but to me, subjectively, one wife is infinitely greater. And that’s a good thing.

Geoffrey writes:

“Christ’s Sermon on the Mount is pure sentientism. If you don’t like sentientism, you'll hate Christianity.”

As a Christian, or at least a Mormon if we don’t count as that, I can confidently say that the second commandment in the law is “thou shalt love thy neighbor as thyself.” The words “thy neighbor” are significant. Christ did not tell me to love ten tribesmen as I do my wife—on one hand are strangers, on the other a sort of super-neighbor, close geographically, emotionally, and familially. The true divine wisdom is to love one’s neighbor. Of course, neighborliness takes many forms; Christ gives the parable of the good Samaritan to remind us that geographic neighbors have a claim on our goodwill even if they are not our ethnic or religious neighbors. But I digress.

The experience of a fly is real (imo), but the fly is hardly my neighbor. My duties extend in concentric growing circles. First is my duty to my own center, that is, to God. Second is to myself and my wife, who is bone of my bone and flesh of my flesh. Then to my children (first is on the way) and family, then my community, nation, and finally the whole human race. Maybe outside that is another circle for higher mammals and then another for vertebrates, but at this point the circle has grown so diffuse that it is almost meaningless, and simple hunger on the part of an inner circle’s members takes precedence over their very lives.

I think this is the only human way to live inasmuch as the human is a subject and not an object. Kierkegaard uses the formula “subjectivity is truth”, and I think this well summarizes the whole affair. You write of shared fictions between husband and wife that make the relationship work; this is just an affirmation that their subjective reality is different from the objective reality—and better, more functional, more fulfilling. A “scientific” relationship with another human being probably ends up looking like circling: all very factual, “I feel this right now” in its immediacy, and it makes subjects crazy. The subjective approach is, as you say, entirely natural, the natural instinct of every normal human; it is the teaching of holy writ and entirely in accordance with honest philosophy. It is the WIS to the INT of scientism and objectivity.


EA sounds like an attempt to apply Enron's accounting style to the world of morality: counting assumed future gains to offset present-day costs.

> If I am in a war against the Nazis, and I see a German helmet in my scope, containing a being which has experiences that matter, can I blow his experiences out through a fist-sized hole in the back of his head? If not, I cannot fight a war and the Nazis win.

Given how much they contributed to rocket technology (a necessary precursor to the future galactic civilization), that's probably not the only objection EA would have to fighting actual (as opposed to redneck) Nazis.


As QC pointed out in his interview with Eigenrobot (https://lnns.co/Fx1brpdkgTE), the problem with Effective Altruism is that it is essentially Scrupulosity (https://en.wikipedia.org/wiki/Scrupulosity) for atheists.


For the low cost of $.30 a day, you too can support a child of the elites. For your monthly donation you will receive writings from the Substack writer you choose to save!


How does EA benefit me and mine? Only question I want answered.


Maybe Yarvin is secretly the most effective altruist of all... :D


I'm not owning the libs because I enjoy it, ok? I'm owning the libs because we have to save the world. I do also enjoy it though.


> Anything that interferes with that long-term vision is, to the best and brightest EAs, odious. We're aware of short-term/long-term tradeoffs in terms of happiness and suffering.

These EAs sure sound like the libs.
