16 Comments

Every subscription to Gray Mirror is $10 a month that could have gone to malaria-ridden African children. Gray Mirror is literally killing thousands of African children. How could you do this, Curtis?


Thanks for the mention, Lubumbashi and Curtis. I have been persuaded by Peter Singer and Toby Ord and others that Effective Altruism (EA) is a very good thing. Maybe not the ultimate best thing. That's for future generations to decide. But it's hugely better than the delusional idiocy, partisan posturing, and conspicuous charity that passes for 'doing good' nowadays.

I'm not an OG Effective Altruist, but I've been involved with the movement for about six years, have taught three undergrad classes on 'The Psychology of Effective Altruism', and have written a bit about human moral evolution and vacuous virtue signaling vs. evidence-based, rational altruism.

I think Curtis shows some hints of three key misunderstandings here.

The first is underestimating how much attention the EA subculture already gives to these psychological issues -- the mismatch between human nature as it is, and how an 'ideal utilitarian' would ideally act. EAs are acutely, painfully, relentlessly, exasperatedly aware of the cognitive and emotional biases that shape our behaviors towards others, including our narcissism, nepotism, tribalism, anthropocentrism, moral posturing, moral blindspots, ethical inconsistencies, etc. It's at least half of what we talk about. We have no illusions that people are naturally equipped to be good at utilitarian reasoning or EA. We don't even think that most people could ever get very good at it. EA has made a self-conscious decision to remain a small, select, elite subculture rather than try to become a mass movement -- and it's done so precisely because EAs understand that most people are not, and may never be, capable of rational utilitarian reasoning. We're not trying to launch a populist movement of noisy vegan activists. We're trying to quietly set up a sort of Alternative Cathedral based a little more on reason and evidence, and a little less on the usual hegemonic nonsense.

Second, utilitarianism isn't really that complicated or exotic. It's not really about maximizing pleasure. It's about recognizing that all other sentient beings live their lives with a degree of subjective awareness that is often equal to our own (in the case of most other humans), or is a bit simpler, but equally real (in the case of other animal species). That's it. Other beings experience stuff. If you believe your own experience matters to you, you should accept that those other beings have experiences that matter to them. This is a major source of meaning in the world, for people who have any capacity for accepting the truth of it. Everything else about utilitarianism flows from this 'sentientism'.

(Of course, every authentically great religious leader has been a sentientist at heart. Christ's Sermon on the Mount is pure sentientism. If you don't like sentientism, you'll hate Christianity. And, yes, there's an active sub-sub-culture of Christian EAs.)

Third, EA doesn't lead to a fentanyl collapse, because a fentanyl collapse isn't good for the long term. Look, EA ten years ago was focused mostly on making charities more efficient, comparing their cost-effectiveness in solving problems like reducing malaria and river blindness in Africa, or figuring out whether direct cash transfers to poor people work better than 'foreign aid', or trying to keep billions of animals from suffering too much in factory farms. More recently, though, the hard-core EA people are really focused on long-termism, not just sentientism. They're much more concerned about existential risks to our species and civilization than about improving a few charities and nudging a few policies around. We've realized you can do all the good you want for the next 10 generations, but if that doesn't lead to 100,000+ generations of sustainable, galaxy-spanning sentience, you're wasting your time. The goal isn't to make the poor of Africa and Asia a little less miserable. The goal is a vigorous, meaningful, awesome interstellar civilization that lasts for a very long time. That's how you maximize aggregate, long-term, sentient value, happiness, and meaning. A billion generations of a trillion people each is a billion trillion sentient lives, and if you like life, and you have some objectivity, you should like the math.
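For what it's worth, the math does check out; here is a quick sanity check in Python, using the comment's round numbers rather than any real demographic projection:

```python
# Quick check of the comment's round numbers -- illustrative only, not a forecast.
generations = 10**9             # "a billion generations"
people_per_generation = 10**12  # "a trillion people each"

total_sentient_lives = generations * people_per_generation
print(f"{total_sentient_lives:e}")  # 1.000000e+21 -- a billion trillion (a sextillion) lives
```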

Anything that interferes with that long-term vision is, to the best and brightest EAs, odious. We're aware of short-term/long-term tradeoffs in terms of happiness and suffering. We could turn society into a perpetual Burning Man chasing cool drugs, cool sex, coolness uber alles... until the underclass stops delivering water to the grey dusty Playa. EAs know that these short-term temptations are the Great Temptation -- the looming catastrophe that probably explains the 'Great Filter'. We know that most intelligent life in the universe evolves to chase proxies for biological fitness (happy drugs, happy movies, happy careerism) rather than long-term fitness itself (good babies & civilizations). And then most intelligent life probably burns itself out in short-term pleasure-seeking, bad governance, fiat money, and grim collapse. We talk about this stuff all the time. We want human life to survive the real existential threats (like thermonuclear war, bioweapons, and AI), not just the politicized pseudo-threats (like climate change, inequality, or systemic racism). And EAs are willing to discuss strategies and tactics to sustain civilization over the long term that do not necessarily buy into the classical liberal hegemonic assumptions about how the elites should relate to the masses.

You know who else tends to be long-termist in their orientation?

NeoReactionaries. Traditionalists. Monarchists. Nationalists. Religious pronatalists. Scholars. Crypto dudes. Ornery post-libertarians. Post-Rationalists.

There's a huge intellectual and ethical overlap, and a potential strategic alliance, between NRx and EA. Not to mention a lot of too-school-for-cool people who really should meet each other.


I have a few comments on effective altruism from an evolutionary perspective. I will say upfront that William MacAskill (Doing Good Better) is the better representative of the meme than Ord.

I think there is an important evolutionary component missing from your analysis, Uncle Yarv. Altruism (the adaptive form, not the idealized form) has a very specific function in social critters, ourselves included. Of course this is all rudimentary evolutionary theory, but it is worth pondering why this maladaptive form of altruism has emerged. It is maladaptive because it is often targeted at communities one will never interact with, while leaving one's own community to suffer and rot. The very purpose of altruism is to maintain the structural integrity of one's community, but directing it at a community one does not interact with is bound to degenerate, as Uncle Yarv's last paragraph here alludes to. Consider putting 10% of your paycheque into altruistic pursuits directed at your own immediate community, and then continuing to see that community degenerate. The immediate feedback that your money is obviously not *effective* would motivate anyone to look for other ways to allocate resources to their community, or to start searching for a new community to inhabit.

When one allocates one's excess resources to a community that is not one's own, both one's own community and the community receiving those resources are essentially doomed to fail. I think there's a tie-in with Taleb's Skin in the Game thesis here. Altruism detached from skin in the game is just narcissism. If one has earnest skin in the game, one's altruism actually stands a chance at being effective. A couple of clicks on a website that deducts 10% of your paycheque in return for some good feels is not authentic skin in the game; it is only the illusion of skin in the game.


"random humans more than random dogs"

Yeah, I'll be the exception on this one here. Sorry not sorry.


I like the addition of random and improbable locations assigned to those who ask these questions.


The dual headquarters of EA are the Bay Area and Oxford, UK. What distinguishes the first from the second is the prevalence of super-wealthy tech people who, because they are what they are, infuse EA with data analysis -- i.e., determining how best to invest each philanthropic dollar to maximize positive impact by comparative measurement.

This is the premise, although the themes on the ground are often more curious. Open Philanthropy, for example, established and funded by billionaire Facebook co-founder Dustin Moskovitz, is arguably the foremost funder of EA initiatives. But the foundation's website emphasizes long-term futures, existential risks, and -- the pet interest of its CEO -- the design of a new Artificial Intelligence-dominated galactic civilization over the next 500 to 1 billion years. Just how one quantitatively measures different actions in support of said AI Singularity is not entirely clear.

The UK branch derives more directly from Peter Singer and several younger philosophers and humanists at Oxford University. But it too asserts that actions can be ranked according to the quantum of good they bring to the world, and that effective philanthropy consists of supporting those that rank highest. The method is more intuitive than among the Bay Area types -- more trusting that the affective will produce the effective.
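A minimal sketch of what that ranking looks like in practice, assuming invented cost and impact figures purely for illustration (none of these numbers come from real charity evaluations):

```python
# Hypothetical interventions with invented cost-effectiveness estimates,
# purely to illustrate ranking by "quantum of good" per dollar -- not real data.
interventions = {
    "antimalarial bednets":  {"cost_usd": 1_000_000, "dalys_averted": 25_000},
    "deworming":             {"cost_usd": 1_000_000, "dalys_averted": 12_000},
    "direct cash transfers": {"cost_usd": 1_000_000, "dalys_averted": 4_000},
}

# Rank by estimated DALYs averted per dollar: the higher the ratio,
# the further each philanthropic dollar is estimated to go.
ranked = sorted(
    interventions.items(),
    key=lambda kv: kv[1]["dalys_averted"] / kv[1]["cost_usd"],
    reverse=True,
)

for name, d in ranked:
    print(f"{name}: {d['dalys_averted'] / d['cost_usd']:.4f} DALYs averted per dollar")
```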

But of course because EA is a movement, the two camps inevitably blend, thus producing an ever less coherent construct.


"The implicit assumption of utilitarianism is that the purpose of life is to maximize net pleasure."

This is a straw man; the explicit mission statement of utilitarianism is to maximize net utility. If you think that utility is synonymous with pure, uncut pleasure, then the end goal of universal basic fentanyl might actually sound appealing.

But, for most of us, our utility function takes both pleasure & meaning into consideration. What does a reductio ad absurdum look like if we're trying to maximize net utility as defined as 1/4 pleasure & 3/4 meaning? VR D&D pod-life?
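To make that concrete, here is a toy version of such a utility function; the weights are the commenter's, and the 0-to-1 scores below are made up for illustration, not measurements of anything:

```python
# Toy utility function from the comment: 1/4 pleasure, 3/4 meaning,
# both scored on an arbitrary 0..1 scale. Purely illustrative.
def utility(pleasure: float, meaning: float) -> float:
    return 0.25 * pleasure + 0.75 * meaning

# Universal basic fentanyl: pleasure pegged at the maximum, meaning near zero.
print(utility(pleasure=1.0, meaning=0.05))  # 0.2875

# "VR D&D pod-life": decent pleasure plus simulated-but-felt meaning.
print(utility(pleasure=0.7, meaning=0.8))   # 0.775
```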

That said, I still think utilitarianism is silly. Objective reality is consistent & amenable to rigid systems, but morality is subjective and therefore chaotic & unpredictable. It might feel like we're wielding the light of "rationality" or "Science" or whatever, but in the end, I suspect any ethical framework more structured than "do what feels right" will inevitably have absurd edge cases.


Any kind of "Good = maximizes-people's-X-on-average" doctrine can be called "Utilitarianism". X doesn't have to be pleasure, whatever that is.

Curtis has said that he wants to maximize every group's average feeling-okay-about-life-ness, which suggests that he diverges from this general formula in that groupings of individuals play some role in the calculation, but he hasn't made it clear exactly how the calculation is to be done. Presumably group populations are discounted to some extent if not entirely -- otherwise the groupings would be irrelevant. So, for example, if there are 99 million Euros and 1 million Navajos, the Euro group's average feeling-okay-about-life-ness quantity wouldn't matter 99 times as much as the average Navajo quantity. Would it matter twice as much, though?
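One way to make that question precise is to put a discount exponent on group population, interpolating between a straight per-capita average and counting every group equally. This is only a sketch of the commenter's guess at the calculation, with made-up scores; it is not a formula Curtis has actually given:

```python
# Group-weighted average of feeling-okay-about-life-ness with population discounting.
# weight = population ** alpha:
#   alpha = 1.0 -> straight per-capita average (the groupings become irrelevant)
#   alpha = 0.0 -> every group counts equally, regardless of size
# Populations and scores below are the comment's hypothetical, not data.
def group_weighted_average(groups: dict, alpha: float) -> float:
    weights = {name: pop ** alpha for name, (pop, _score) in groups.items()}
    total_weight = sum(weights.values())
    return sum(weights[name] * score for name, (_pop, score) in groups.items()) / total_weight

groups = {
    "Euros":   (99_000_000, 0.6),  # (population, average feeling-okay-about-life-ness)
    "Navajos": ( 1_000_000, 0.3),
}

print(group_weighted_average(groups, alpha=1.0))   # ~0.597: the Euro group counts 99x
print(group_weighted_average(groups, alpha=0.0))   # 0.45: both groups count equally
print(group_weighted_average(groups, alpha=0.15))  # weight ratio ~2: Euros count roughly twice as much
```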

He also diverges from the general "utilitarian" formula in that he seems to think that no situation will be good if even a single group's members feel on average that their lives are crappy.

Maybe he'd say, "When I say 'good' and 'should' I'm just telling you what my friend the coming dictator and I want. We want every group's members to feel okay about life on average. If you share our desire then follow us and we'll get it done."

Another guy might agree that "X is good" means "I want X" but want the maximizing of average pleasure pure and simple. In that case he'd push "hedonic utilitarianism" as a practical program but he wouldn't be a utilitarian in a theoretical sense. Similarly, a "divine command theorist" might think that God has commanded us to maximize average pleasure.


This is a short, cutting description of a fundamental flaw in the psychology of leftism (or Christianity, more generally).

We evolved to care for ourselves and our immediate family, then our kin and close friends, then our extended kin and extended network, and only distantly - or not at all - about anyone else.

But to hell with reality; leftism demands you constantly swallow the lies.


"Pope, I want to be a good person, but I don’t believe in God. And the Pope was like: no problem, my child. Just act as if you did."

"Simulated emotions are bad, kids. Just be you."

Wat Do?
