Charles from Lubumbashi writes:
I find utilitarianism and effective altruism of the Peter Singer (infanticide) / Toby Ord (The Precipice) variety persistently persuasive (while simultaneously agreeing with basically all your opinions). Geoffrey Miller, I believe, is similarly persuaded by both these thinkers and by your thought. You’ve spoken/written on these two topics a few times here and there.
But a thorough commentary/debunking of these two memeplexes (or is it one?) would go a long way, I think, because these opinions are very popular with the most impressive young cosmopolitan armigers, as opposed to the other memeplex (you know the one), which is more popular with the less impressive/more peripheral aristocrats. This question isn’t exactly self-help, but I feel it would nonetheless help my self to hear your extended take on these memeplexes and the brains they inhabit.
Thank you, Charles, for stretching the boundaries of “self-help” back to our usual highbrow intellectual fare.
Utilitarianism and “effective altruism,” which are both very flawed ways to think, are even more flawed when you put them together. But somehow they are exactly right for attracting bright young ladies. And it is undeniable that they have a certain foolish consistency. Then again, so did Hitler. But they are not so into Hitler, the ladies… Let me help you help yourself with some simple, unrefined “street” philosophical self-defense techniques for the next time you feel targeted by these types of females.
Utilitarianism
The implicit assumption of utilitarianism is that the purpose of life is to maximize net pleasure. Utilitarianism is inherently hedonistic. Even GDP, a utilitarian concept, is adjusted “hedonically.”
Utilitarianism is always vulnerable to the old reductio ad absurdum. The absurdum here is basically: fentanyl. Utilitarianism, taken seriously, suggests a switch to a new “fentanyl economy” with unprecedented productive powers, in which unprecedented quantities of pure pleasure can be produced with unprecedented efficiency.
Utilitarianism pretends to be a universal theory which brooks no exceptions. Yet if we admit this one exception—hardly an exceptional exception—we see that the theory is not universal, which is sufficient to refute it. Once you start looking around for other features of the 20th-century economy which are not as absurd as the fentanyl economy, but are nonetheless absurd—do you find them? Wake up and smell the coffee.
Effective altruism
EA is a remarkably 20th-century concept: it has the typical 20th-century fantasy of inventing a superhuman humanity. This is also a typical American fantasy: Philadelphia means “brotherly love,” making Philly the original city of effective altruism. This dream of universal philia is anything but new, but selling old wine in new bottles is another 20th-century thing.
The problem with “effective altruism” is not that it is not good, but that it does not work. Brotherly love is an emotion, not an idea.
Somewhere out there, there are EA people who have gone past dogs, pigs, and mice, committed to the uncute, unfuzzy reptilia, and finally learned to feel true philia for insects. Covid can only be next—is vaccination ethical? Think of all those viruses that will never be born…
Of course, no one actually feels brotherly love for a virus or a bug. What they are doing is simulating brotherly love for a bug, or at least a pig. In the century of narcissism it is de rigueur for everyone to take everyone else’s simulations at face value. Utilitarianism is vulnerable to a reductio ad absurdum; effective altruism is a reductio ad absurdum.
The problem with effective altruism is that real sympathy, philia, is a real emotion. When the real emotion is displaced by a narcissistic abstraction, problems arise. I forget who said that no one’s heart was colder than that of a true humanitarian.
Effective altruism is the extreme end of what Dickens called “telescopic philanthropy.” Mrs. Jellyby in Bleak House is obsessed with orphans in Africa—Dickens is satirizing the Niger Expedition of 1841, the West’s first major step toward what would become the present aid-industrial complex—and neglects her family and everyone around her. Telescopic philanthropy has wrecked her capacity for genuine sympathy, like porn destroying the capacity for sex. The abstraction has driven out the emotion.
But if Mrs. Jellyby is good for the orphans in Africa, isn’t the abstraction still good?
That’s the problem—she isn’t. Because her sympathy is fundamentally narcissistic, it doesn’t work. The Niger Expedition was a disaster. So is the aid-industrial complex. And why doesn’t it work? Because no one actually cares if it works.
Simulated emotions are bad, kids. Just be you. When you’re just you—you’ll discover that you love your family more than your neighbors, your neighbors more than random fellow citizens, random fellow citizens more than random humans, random humans more than random dogs, and random dogs more than random lizards.
Can you change this? Maybe a little… but using your forebrain to tell your emotions what to feel is usually a recipe for failure. Or to put it differently: it is superhuman. Effective altruism is a political movement which is not effective until it persuades large numbers of people to become superhuman. What could go wrong?
What could go wrong is that you get not mass superhumanity, but mass simulation. Which is sending you straight down the path to the 20th century’s mass insanities.
And you would think the most interesting intellectual question for rational, effective altruists is not even how to stave off the AI-pocalypse, but the elephant in the room—the fact that the concept of “effective” altruism implies the existence of an ineffective altruism.
Why is there all this ineffective altruism around? Surely its creators intended it to be effective? So why, if we do create new, effective institutions of altruism, will they not become ineffective (or even counterproductive) in the same way? Put that in your pipe and smoke it, lady.
Every subscription to Gray Mirror is $10 a month that could have gone to malaria-ridden African children. Gray Mirror is literally killing thousands of African children. How could you do this, Curtis?
Thanks for the mention, Lubumbashi and Curtis. I have been persuaded by Peter Singer and Toby Ord and others that Effective Altruism (EA) is a very good thing. Maybe not the ultimate best thing. That's for future generations to decide. But it's hugely better than the delusional idiocy, partisan posturing, and conspicuous charity that passes for 'doing good' nowadays.
I'm not an OG Effective Altruist, but I've been involved with the movement for about six years, have taught three undergrad classes on 'The Psychology of Effective Altruism', and have written a bit about human moral evolution and vacuous virtue signaling vs. evidence-based, rational altruism.
I think Curtis shows some hints of three key misunderstandings here.
The first is underestimating how much attention the EA subculture already gives to these psychological issues -- the mismatch between human nature as it is and how an 'ideal utilitarian' would act. EAs are acutely, painfully, relentlessly, exasperatedly aware of the cognitive and emotional biases that shape our behavior towards others, including our narcissism, nepotism, tribalism, anthropocentrism, moral posturing, moral blind spots, ethical inconsistencies, etc. It's at least half of what we talk about. We have no illusions that people are naturally equipped to be good at utilitarian reasoning or EA. We don't even think that most people could ever get very good at it. EA has made a self-conscious decision to remain a small, select, elite subculture rather than try to become a mass movement -- and it's done so precisely because EAs understand that most people are not, and may never be, capable of rational utilitarian reasoning. We're not trying to launch a populist movement of noisy vegan activists. We're trying to quietly set up a sort of Alternative Cathedral based a little more on reason and evidence, and a little less on the usual hegemonic nonsense.
Second, utilitarianism isn't really that complicated or exotic. It's not really about maximizing pleasure. It's about recognizing that all other sentient beings live their lives with a degree of subjective awareness that is often equal to our own (in the case of most other humans), or is a bit simpler, but equally real (in the case of other animal species). That's it. Other beings experience stuff. If you believe your own experience matters to you, you should accept that those other beings have experiences that matter to them. This is a major source of meaning in the world, for people who have any capacity for accepting the truth of it. Everything else about utilitarianism flows from this 'sentientism'.
(Of course, every authentically great religious leader has been a sentientist at heart. Christ's Sermon on the Mount is pure sentientism. If you don't like sentientism, you'll hate Christianity. And, yes, there's an active sub-sub-culture of Christian EAs.)
Third, EA doesn't lead to a fentanyl collapse, because a fentanyl collapse isn't good for the long term. Look, EA ten years ago was focused mostly on making charities more efficient: comparing their cost-effectiveness at solving problems like malaria and river blindness in Africa, figuring out whether direct cash transfers to poor people work better than 'foreign aid', or trying to keep billions of animals from suffering too much in factory farms. More recently, though, the hard-core EA people are really focused on long-termism, not just sentientism. They're much more concerned about existential risks to our species and civilization than with improving a few charities and nudging a few policies around. We've realized you can do all the good you want for the next 10 generations, but if that doesn't lead to 100,000+ generations of sustainable, galaxy-spanning sentience, you're wasting your time. The goal isn't to make the poor of Africa and Asia a little less miserable. The goal is a vigorous, meaningful, awesome interstellar civilization that lasts for a very long time. That's how you maximize aggregate, long-term sentient value, happiness, and meaning. A billion generations of a trillion people each is a billion trillion sentient lives, and if you like life, and you have some objectivity, you should like the math.
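(Spelled out, with those deliberately round, purely illustrative numbers, the arithmetic is:

$$10^{9}\ \text{generations} \times 10^{12}\ \text{people per generation} = 10^{21}\ \text{sentient lives} = \text{a billion trillion.}$$

No precision is claimed; the point is just the order of magnitude.)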
Anything that interferes with that long-term vision is, to the best and brightest EAs, odious. We're aware of short-term/long-term tradeoffs in terms of happiness and suffering. We could turn society into a perpetual Burning Man chasing cool drugs, cool sex, coolness über alles... until the underclass stops delivering water to the grey dusty Playa. EAs know that these short-term temptations are the Great Temptation -- the looming catastrophe that probably explains the 'Great Filter'. We know that most intelligent life in the universe evolves to chase proxies for biological fitness (happy drugs, happy movies, happy careerism) rather than long-term fitness itself (good babies & civilizations). And then most intelligent life probably burns itself out in short-term pleasure-seeking, bad governance, fiat money, and grim collapse. We talk about this stuff all the time. We want human life to survive the real existential threats (like thermonuclear war, bioweapons, and AI), not just the politicized pseudo-threats (like climate change, inequality, or systemic racism). And EAs are willing to discuss strategies and tactics to sustain civilization over the long term that do not necessarily buy into the classical liberal hegemonic assumptions about how the elites should relate to the masses.
You know who else tends to be long-termist in their orientation?
NeoReactionaries. Traditionalists. Monarchists. Nationalists. Religious pronatalists. Scholars. Crypto dudes. Ornery post-libertarians. Post-Rationalists.
There's a huge intellectual and ethical overlap, and a potential strategic alliance, between NRx and EA. Not to mention a lot of too-school-for-cool people who really should meet each other.