Geoffrey Miller has some feedback on effective altruism:
> The first is underestimating how much attention the EA subculture already gives to these psychological issues—the mismatch between human nature as it is, and how an ‘ideal utilitarian’ would ideally act.
> EAs are acutely, painfully, relentlessly, exasperatedly aware of the cognitive and emotional biases that shape our behaviors towards others, including our narcissism, nepotism, tribalism, anthropocentrism, moral posturing, moral blindspots, ethical inconsistencies, etc. It's at least half of what we talk about.
> We have no illusions that people are naturally equipped to be good at utilitarian reasoning or EA. We don't even think that most people could ever get very good at it.
As a philosopher, I must protest. Cognitive and emotional biases are different. Cognitive biases are errors in logic. They should always be corrected. There is no such thing as an emotional “bias.”
Emotional “biases” are just how you feel when it ain’t how you want yourself to feel. Suppose, in an abstract trolley problem, you would want to want to run over your wife, instead of ten Pashtun tribesmen. Dollars to donuts, given the real choice, you would take out the Pashtuns.
I refuse to believe that anyone, regardless of their IQ, is any good at telling themselves how to feel. Indeed I believe that the higher your IQ, the better you are at pretending to feel—at wanting to want. It is only a small step to actually acting out that false persona. This is exactly what Christopher Lasch meant by the “culture of narcissism”—it is simply accepted that this totally insane process is a normal way of thinking and being.
I would say that instead of telling themselves what to feel, EA people would do better to ask themselves how they actually do feel. How many random Pashtuns equal a wife, a mother, a child?
Personally, to save one of my own children, I might very well condemn all of Central Asia to death by trolley. To save them both I could throw in Brazil. I must be a very ineffective altruist, I’m afraid.
> EA has made a self-conscious decision to remain a small, select, elite subculture rather than try to become a mass movement—and it's done so precisely because EAs understand that most people are not, and may never be, capable of rational utilitarian reasoning. We're not trying to launch a populist movement of noisy vegan activists. We're trying to quietly set up a sort of Alternative Cathedral based a little more on reason and evidence, and a little less on the usual hegemonic nonsense.
This is a bold ambition, but it is an ambition which must test itself with a rigor congruent to that boldness. Or, in English, the thing that sucks about all the Alternative Cathedrals is just that they all suck.
Please don’t suck? The best way to avoid sucking is a high tolerance for adversarial discourse. Here is a modest and humble dose of that.
> Second, utilitarianism isn’t really that complicated or exotic. It’s not really about maximizing pleasure. It’s about recognizing that all other sentient beings live their lives with a degree of subjective awareness that is often equal to our own (in the case of most other humans), or is a bit simpler, but equally real (in the case of other animal species). That’s it. Other beings experience stuff.
Bugs too? Or are we cutting it off at mammals (who have an emotional system like ours)?
Also, doesn’t sentience scale not only between man and the lower animals—we cannot but accept that a man is more sentient than a rat—but within man himself?
If we accept this standard, a professor is worth much more than a strawberry picker, who is worth much more than a trisomy-21 victim. A thousand rats, however, considerably outweigh the professor, in cortical cells alone. In fact, did you know that all major cities have systematic rat-murdering programs?
> If you believe your own experience matters to you, you should accept that those other beings have experiences that matter to them. This is a major source of meaning in the world, for people who have any capacity for accepting the truth of it. Everything else about utilitarianism flows from this ‘sentientism.’
This feels a little distant from the “utilitarianism” of Bentham and Mill, but ok. Note that again (“you should accept”) you are telling yourself how to feel. Actually, I guess, you are telling me how to feel. Or everyone.
If I am in a war against the Nazis, and I see a German helmet in my scope, containing a being which has experiences that matter, can I blow his experiences out through a fist-sized hole in the back of his head? If not, I cannot fight a war and the Nazis win.
> (Of course, every authentically great religious leader has been a sentientist at heart. Christ’s Sermon on the Mount is pure sentientism. If you don’t like sentientism, you'll hate Christianity. And, yes, there's an active sub-sub-culture of Christian EAs.)
Well… if you want to argue that EA is a religion, possibly a Christian sect—maybe even just a rebranded Unitarianism—that’s a bold strategy. Let’s see if it pays off.
I do not know much about Jesus or his milieu, though I recently did watch “Ben-Hur.” Supposedly this was a documentary, though I have my doubts. In any case, I suspect the Christ would be pretty confused by any system of ethics that puts farm animals on the same level as, or even in the same qualitative category as, human beings.
Indeed, while we do associate Christianity with a flattening of the concentric circles of sympathy that normally lead one to prefer one’s family to a gaggle of Pashtuns, we rarely see an interpretation this hard-line and chiliastic.
Certainly Christians have often been comfortable with indiscriminately slaughtering non-Christians—when the Crusaders took Jerusalem, the streets ran red with sentient blood. And saying you are more Christian than the Crusaders is a little bit like Obama claiming to know more about Islam than ISIS.
> Third, EA doesn't lead to a fentanyl collapse, because a fentanyl collapse isn't good for the long term.
Yes, this was not about EA but utilitarianism (in the sense of Bentham and Mill).
> Look, EA ten years ago was focused mostly on making charities more efficient, comparing their cost-effectiveness in solving problems like reducing malaria and river blindness in Africa, or figuring out whether direct cash transfers to poor people work better than ‘foreign aid’, or trying to keep billions of animals from suffering too much in factory farms.
These are, um, three rather different things—although the conjunction of the three is disturbing, as it gives the unconscious sense that Africa is, like, a human factory farm. But for the first two, at least, did anyone ever consider—colonialism?
> More recently though, the hard core EA people are really focused on long-termism, not just sentientism. They’re much more concerned about existential risks to our species and civilization, than with improving a few charities and nudging a few policies around. We've realized you can do all the good you want for the next 10 generations, but if that doesn't lead to 100,000+ generations of sustainable, galaxy-spanning sentience, you're wasting your time.
What if the worst existential risk to our species and civilization is just—the libs? In that case, the true EA would just be—owning the libs. Food for thought, gentlemen.
> The goal isn't to make the poor of Africa and Asia a little less miserable. The goal is a vigorous, meaningful, awesome interstellar civilization that lasts for a very long time. That's how you maximize aggregate, long-term, sentient value, happiness, and meaning. A billion generations of a trillion people each is a billion trillion sentient lives, and if you like life, and you have some objectivity, you should like the math.
The math is definitely telling us, I feel, to own the libs.
> Anything that interferes with that long-term vision is, to the best and brightest EAs, odious. We're aware of short-term/long-term tradeoffs in terms of happiness and suffering. We could turn society into a perpetual Burning Man chasing cool drugs, cool sex, coolness uber alles... until the underclass stops delivering water to the grey dusty Playa. EAs know that these short-term temptations are the Great Temptation -- the looming catastrophe that probably explains the ‘Great Filter.’
> We know that most intelligent life in the universe evolves to chase proxies for biological fitness (happy drugs, happy movies, happy careerism) rather than long-term fitness itself (good babies & civilizations). And then most intelligent life probably burns itself out in short-term pleasure-seeking, bad governance, fiat money, and grim collapse. We talk about this stuff all the time.
Without realizing that the only solution is the revolt of vitalism, the return of the spirit of the Bronze Age, and the destruction of the cities in fire.
> We want human life to survive the real existential threats (like thermonuclear war, bioweapons, and AI), not just the politicized pseudo-threats (like climate change, inequality, or systemic racism).
I don’t think any of these six things are scary at all. Covid wasn’t a bioweapon. All the H-bombs belong in a museum of the 20th century. As for AI…
What if you really did realize that the main threat to human civilization was the libs? Or at least, their lib ideology? What would you do about it?
> And EAs are willing to discuss strategies and tactics to sustain civilization over the long term that do not necessarily buy into the classical liberal hegemonic assumptions about how the elites should relate to the masses. You know who else tends to be long-termist in their orientation? NeoReactionaries. Traditionalists. Monarchists. Nationalists. Religious pronatalists. Scholars. Crypto dudes. Ornery post-libertarians. Post-Rationalists. There's a huge intellectual and ethical overlap, and a potential strategic alliance, between NRx and EA. Not to mention a lot of too-school-for-cool people who really should meet each other.
I like to meet everyone! I even like to meet the libs. How else are you going to own them lol… and they think in the long term, too.
Meant to leave this on your previous post but I might as well leave it here:
As a recovering rationalist, I have a particular disdain for utilitarianism. Put simply: utilitarianism is a scam used by the person who decrees what the utility function is, to silence dissent against their leadership.
Think about it. At the end of the day, what is utilitarianism? Utilitarianism is a philosophy that says "things with more utility are better than things with less utility". Ok, sure, fine, that's cool. What is utility? Utility is good things that make people happy, for some philosophical definition of happy.
Ok, sure, fine, that's cool. Who decides what that definition is? When two people disagree, who decides which one is right? In a normal conflict of values, the decision function is "power", as you've written about at length. In a utilitarian context? Well, in my experience, it mostly looks like the same kind of social mind games that the woke pull. And if utilitarian ethics reduces to 'social status manipulation games' whenever there's a conflict, then how is that not just tyranny with extra steps?
You've actually hit on one of my favourite examples.
> Personally, to save one of my own children, I might very well condemn all of Central Asia to death by trolley. To save them both I could throw in Brazil. I must be a very ineffective altruist, I’m afraid.
Every instance of utilitarianism I've ever seen seems to value every human life equally and interchangeably. Utilitarianism seems to always say that one life in my neighbourhood is exactly equal to one life off in some far-flung jungle. Oddly, I can't seem to get to that point from "more utility is better", and yet somehow they always do.
Here's my proposition: human lives are not equal, and there is more utility in saving lives that matter more, relative to saving lives that matter less. Not only are human lives not equal, but human lives don't even have an objective, concrete, singular value. The evaluated value of a human life depends on the social distance from the evaluator. My family's lives are more valuable than other people's lives, _to me_. This can be true even while recognizing at the same time that their lives are _not_ more valuable to other people.
"True" Utilitarianism would be agnostic on the question of whether or not my particular metric for utility is correct. It would simply say, taking my metric as an axiom, it is good to maximize utility. And yet, for some reason, if you were to, for example, go to an EA meeting and say some variant of "we all live in America. Fuck Africa. Who gives a shit if bushmen die of malaria. One American is worth a hundred of them, and so the greatest utility is to stop wasting money over there and spend it over here"... try it and let me know how it goes.
There is no principled, objective, _utilitarian_ way to make the judgement that my utility function is "wrong" but the "maximize lives saved" function is "right". None. It doesn't exist. But utilitarians will pretend it does exist, and invoke 'utility' to silence all viewpoints to the contrary. Ergo, utilitarianism is just a scam used by whoever arbitrarily decided what utility function we're using, to silence anyone who thinks a different function would be better.
"The math is definitely telling us, I feel, to own the libs." I LOLed.