16 Comments

Every subscription to Gray Mirror is $10 a month that could have gone to malaria-ridden African children. Gray Mirror is literally killing thousands of African children. How could you do this, Curtis?

Expand full comment

Every $10 a month spent on African children could've gone to helping Uncle Yarv navigate his hilarious fish-out-of-water misadventures in the west coast singles scene.

Expand full comment

Thanks for the mention, Lubumbashi and Curtis. I have been persuaded by Peter Singer and Toby Ord and others that Effective Altruism (EA) is a very good thing. Maybe not the ultimate best thing. That's for future generations to decide. But it's hugely better than the delusional idiocy, partisan posturing, and conspicuous charity that passes for 'doing good' nowadays.

I'm not an OG Effective Altruist, but I've been involved with the movement for about six years, have taught three undergrad classes on 'The Psychology of Effective Altruism', and have written a bit about human moral evolution and vacuous virtue signaling vs. evidence-based, rational altruism.

I think Curtis shows some hints of three key misunderstandings here.

The first is underestimating how much attention the EA subculture already gives to these psychological issues -- the mismatch between human nature as it is, and how an 'ideal utilitarian' would ideally act. EAs are acutely, painfully, relentlessly, exasperatedly aware of the cognitive and emotional biases that shape our behaviors towards others, including our narcissism, nepotism, tribalism, anthropocentrism, moral posturing, moral blindspots, ethical inconsistencies, etc. It's at least half of what we talk about. We have no illusions that people are naturally equipped to be good at utilitarian reasoning or EA. We don't even think that most people could ever get very good at it. EA has made a self-conscious decision to remain a small, select, elite subculture rather than try to become a mass movement -- and it's done so precisely because EAs understand that most people are not, and may never be, capable of rational utilitarian reasoning. We're not trying to launch a populist movement of noisy vegan activists. We're trying to quietly set up a sort of Alternative Cathedral based a little more on reason and evidence, and a little less on the usual hegemonic nonsense.

Second, utilitarianism isn't really that complicated or exotic. It's not really about maximizing pleasure. It's about recognizing that all other sentient beings live their lives with a degree of subjective awareness that is often equal to our own (in the case of most other humans), or is a bit simpler, but equally real (in the case of other animal species). That's it. Other beings experience stuff. If you believe your own experience matters to you, you should accept that those other beings have experiences that matter to them. This is a major source of meaning in the world, for people who have any capacity for accepting the truth of it. Everything else about utilitarianism flows from this 'sentientism'.

(Of course, every authentically great religious leader has been a sentientist at heart. Christ's Sermon on the Mount is pure sentientism. If you don't like sentientism, you'll hate Christianity. And, yes, there's an active sub-sub-culture of Christian EAs.)

Third, EA doesn't lead to a fentanyl collapse, because a fentanyl collapse isn't good for the long term. Look, EA ten years ago was focused mostly on making charities more efficient, comparing their cost-effectiveness in solving problems like reducing malaria and river blindness in Africa, or figuring out whether direct cash transfers to poor people work better than 'foreign aid', or trying to keep billions of animals from suffering too much in factory farms. More recently, though, the hardcore EA people are really focused on long-termism, not just sentientism. They're much more concerned about existential risks to our species and civilization than with improving a few charities and nudging a few policies around. We've realized you can do all the good you want for the next 10 generations, but if that doesn't lead to 100,000+ generations of sustainable, galaxy-spanning sentience, you're wasting your time. The goal isn't to make the poor of Africa and Asia a little less miserable. The goal is a vigorous, meaningful, awesome interstellar civilization that lasts for a very long time. That's how you maximize aggregate, long-term, sentient value, happiness, and meaning. A billion generations of a trillion people each is a billion trillion sentient lives, and if you like life, and you have some objectivity, you should like the math.
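For what it's worth, the arithmetic in that last sentence checks out (reading 'billion' and 'trillion' in their usual short-scale senses):

```latex
10^{9}\ \text{generations} \times 10^{12}\ \text{people per generation} = 10^{21}\ \text{sentient lives}
```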

Anything that interferes with that long-term vision is, to the best and brightest EAs, odious. We're aware of short-term/long-term tradeoffs in terms of happiness and suffering. We could turn society into a perpetual Burning Man chasing cool drugs, cool sex, coolness uber alles... until the underclass stops delivering water to the grey dusty Playa. EAs know that these short-term temptations are the Great Temptation -- the looming catastrophe that probably explains the 'Great Filter'. We know that most intelligent life in the universe evolves to chase proxies for biological fitness (happy drugs, happy movies, happy careerism) rather than long-term fitness itself (good babies & civilizations). And then most intelligent life probably burns itself out in short-term pleasure-seeking, bad governance, fiat money, and grim collapse. We talk about this stuff all the time. We want human life to survive the real existential threats (like thermonuclear war, bioweapons, and AI), not just the politicized pseudo-threats (like climate change, inequality, or systemic racism). And EAs are willing to discuss strategies and tactics to sustain civilization over the long term that do not necessarily buy into the classical liberal hegemonic assumptions about how the elites should relate to the masses.

You know who else tends to be long-termist in their orientation?

NeoReactionaries. Traditionalists. Monarchists. Nationalists. Religious pronatalists. Scholars. Crypto dudes. Ornery post-libertarians. Post-Rationalists.

There's a huge intellectual and ethical overlap, and a potential strategic alliance, between NRx and EA. Not to mention a lot of too-school-for-cool people who really should meet each other.

Expand full comment

You cannot even begin to solve a problem if it is forbidden to talk about the causes. You observe the vast human tragedy that is the third world. You want to do something about it. That's good and noble, I guess. (I think there is something unhealthy and wrong about putting foreigners above your own people. But that's another matter.)

But then you have to have an accurate map of the territory. How did the third world get this way? Even Curtis censors himself more than he once did on UR. UR would tell you that colonialism was actually pretty grand, that the rule of liberal democracy has produced what you see now, etc. And even UR would never bring up IQ... uh, can we even talk about that here?

But the EA bros can't think these thoughts at all. They collect bandaids for a stabbing victim and brag about how much blood loss they prevented. They can feel satisfied that they Did Something and Made An Impact(TM).

You aren't going to bring civilization to outer space if you can't even get it to all of Earth's continents.

Expand full comment

My gut is that, at minimum, it would be gauche to bring up IQ, and especially IQ patterns here. Save that for iSteve.

However, I see hints that suppression of such talk is growing less focused. For example, there is a particular psychometric text that has long had certain pages excluded from Google Books. Some content manager at Google had been very precise in her censoring. I was able to purchase the text online, but the only retailer that came up on the first page of Google results was in Europe - the link to Amazon had been deranked to the second or third page of results.

I checked recently, and found that the pages had been restored to Google Books, and the Amazon link is now on the first page of results (in second place). They're just not working at it quite as hard.

If you're interested...

https://www.google.com/books/edition/WAIS_IV_Clinical_Use_and_Interpretation/lszPs4JXxBYC?hl=en&gbpv=1&pg=PA118

Expand full comment

Every single EA project looks like a god damn "look at me I'm doing good things" poster, which is odd considering we're supposed to be dealing with classes of ethics that are unintuitive by nature, and we should not expect endorsements for these things from anyone in the typical population or their leaders.

Which any sane person would rationalize as a sober capitulation to the political acceptance field; but if so, where are the receipts for this? Why does it all still sound so noble and perfectly intellectual in their missions?

Practically any dope off the street can come up with "10 ways to make the world a better place that 1,000 people will think is evil until those 1,000 people are educated in the human condition".

The political asymmetry makes it dead on arrival as something that claims to at least resemble utilitarianism.

If it's utilitarianism + politics, then it's politics.

----------

> Anything that interferes with that long-term vision is, to the best and brightest EAs, odious.

Complete and utter horseshit for many reasons, but embarrassingly so on account of the *long-term negative consequences* of projects and charities not being a basic feature of their taxonomy, or represented in keeping with their ostensible goals.

Expand full comment

I have a few comments on effective altruism from an evolutionary perspective. I will say upfront that William MacAskill (Doing Good Better) is the better representative of the meme than Ord.

I think there is an important evolutionary component missing from your analysis, Uncle Yarv. Altruism (the adaptive form, not the idealized form) has a very specific function in social critters, ourselves included. Of course this is all rudimentary evolutionary theory, but it is worth pondering why this maladaptive form of altruism has emerged. It is maladaptive because it is often targeted at communities one will never interact with, while leaving one's own community to suffer and rot around them. The very purpose of altruism is to maintain the structural integrity of one's community, but directing that at a community one does not interact with is bound to degenerate, as Uncle Yarv's last paragraph here alludes to. Consider putting 10% of your paycheque into altruistic pursuits within your own direct community, and then continuing to see your community degenerate. The immediate feedback that your money is obviously not *effective* would motivate anyone to look for other ways to allocate resources to their community, or start searching for a new community to inhabit.

By allocating one's excess resources to a community that is not one's own, both one's community and the community receiving said resources are essentially doomed to fail. I think there's a tie-in with Taleb's Skin in the Game thesis here. Altruism detached from skin in the game is just narcissism. If one has earnest skin in the game, one's altruism actually stands a chance at being effective. A couple of clicks on a website that deducts 10% of your paycheque in return for some good feels is not authentic skin in the game; it is only the illusion of skin in the game.

Expand full comment

"random humans more than random dogs"

Yeah, I'll be the exception on this one here. Sorry not sorry.

Expand full comment

I like the addition of random and improbable locations assigned to those who ask these questions.

Expand full comment

The dual headquarters of EA are the Bay Area and Oxford, UK. What distinguishes the first from the second is the prevalence of super-wealthy tech people who, because they are what they are, infuse EA with data analysis: determining how best to invest each philanthropic dollar to maximize positive impact by comparative measurement.
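As a rough illustration of what that comparative measurement amounts to in practice, here is a minimal sketch; the charity names and cost/outcome figures below are invented placeholders, not actual evaluations:

```python
# Hypothetical cost-effectiveness ranking. All names and numbers below are
# invented for illustration; they are not real charity evaluations.

charities = [
    {"name": "Bednet Charity A",    "cost_usd": 1_000_000, "outcome_units": 250},
    {"name": "Deworming Charity B", "cost_usd": 1_000_000, "outcome_units": 120},
    {"name": "Cash Transfer Org C", "cost_usd": 1_000_000, "outcome_units": 80},
]

# Rank by dollars per unit of outcome (e.g. a life saved or a DALY averted):
# the lower the figure, the further each philanthropic dollar goes.
for c in sorted(charities, key=lambda x: x["cost_usd"] / x["outcome_units"]):
    print(f'{c["name"]}: ${c["cost_usd"] / c["outcome_units"]:,.0f} per outcome unit')
```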

This is the premise, although the themes on the ground are often more curious. Open Philanthropy, for example, established and funded by billionaire Facebook co-founder Dustin Moskovitz, is arguably the foremost funder of EA initiatives. But the foundation's website emphasizes its focus on long-term futures, existential risks, and--the pet interest of its CEO--the design of a new Artificial Intelligence-dominated galactic civilization over the next 500 to 1 billion years. Just how one quantitatively measures different actions in support of said AI Singularity is not entirely clear.

The UK branch derives more directly from Peter Singer and several younger philosophers and humanists at Oxford University. But it too asserts that actions can be ranked according to the quantum of good they bring to the world, and effective philanthropy consists of supporting those that rank highest. The method is more intuitive than among the Bay Area types--more trusting that the affective will produce the effective.

But of course because EA is a movement, the two camps inevitably blend, thus producing an ever less coherent construct.

Expand full comment

"The implicit assumption of utilitarianism is that the purpose of life is to maximize net pleasure."

This is a straw man; the explicit mission statement of utilitarianism is to maximize net utility. If you think that utility is synonymous with pure, uncut pleasure, then the end goal of universal basic fentanyl might actually sound appealing.

But, for most of us, our utility function takes both pleasure & meaning into consideration. What does a reductio ad absurdum look like if we're trying to maximize net utility as defined as 1/4 pleasure & 3/4 meaning? VR D&D pod-life?
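Spelling out the weighting the commenter proposes (the 1/4 and 3/4 are theirs; treating per-person pleasure P and meaning M as scores on a 0-to-1 scale is my assumption):

```latex
U = \tfrac{1}{4}\,P + \tfrac{3}{4}\,M, \qquad P, M \in [0, 1]
```

On that scoring, the universal-basic-fentanyl corner (P = 1, M = 0) only earns U = 0.25, so any reductio has to target whatever world maximizes M instead, which is roughly where the VR D&D pod comes in.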

That said, I still think utilitarianism is silly. Objective reality is consistent & amenable to rigid systems, but morality is subjective and therefore chaotic & unpredictable. It might feel like we're wielding the light of "rationality" or "Science" or whatever but in the end, I suspect any ethical framework more structured than "do what feels right" will inevitably have absurd edge cases.

Expand full comment

Any kind of "Good = maximizes-people's-X-on-average" doctrine can be called "Utilitarianism". X doesn't have to be pleasure, whatever that is.

Curtis has said that he wants to maximize every group's average feeling-okay-about-life-ness, which suggests that he diverges from this general formula in that groupings of individuals play some role in the calculation, but he hasn't made it clear exactly how the calculation is to be done. Presumably group-populations are discounted to some extent if not entirely -- otherwise the groupings would be irrelevant. So, for example, if there are 99 million Euros and 1 million Navajos, the Euro-group's average feeling-okay-about-life-ness quantity wouldn't matter 99 times as much as the average Navajo quantity. Would it matter twice as much, though?
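A minimal sketch of the two aggregation rules that question is pointing at, using the 99-million / 1-million group sizes from the example; the "feeling-okay-about-life-ness" scores and the discounting scheme are invented purely for illustration:

```python
# The group sizes come from the 99-million / 1-million example above; the
# "avg_ok_ness" scores are made up for illustration only.

groups = {
    "Euros":   {"population": 99_000_000, "avg_ok_ness": 0.8},  # hypothetical score
    "Navajos": {"population": 1_000_000,  "avg_ok_ness": 0.4},  # hypothetical score
}

def aggregate(groups, alpha):
    """Weight each group by population**alpha.

    alpha = 1.0 -> plain per-capita averaging (the Euro group counts 99x).
    alpha = 0.0 -> every group counts equally, regardless of size.
    Values in between discount large groups partially.
    """
    weights = {name: g["population"] ** alpha for name, g in groups.items()}
    total = sum(weights.values())
    return sum(weights[name] * g["avg_ok_ness"] for name, g in groups.items()) / total

for alpha in (1.0, 0.5, 0.0):
    print(f"alpha={alpha}: aggregate feeling-okay-ness = {aggregate(groups, alpha):.3f}")
```

Under this particular (invented) parameterization, the Euro group's average would count exactly twice as much at roughly alpha ≈ 0.15, since 99^0.15 ≈ 2.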

He also diverges from the general "utilitarian" formula in that he seems to think that no situation will be good if even a single group's members feel on average that their lives are crappy.

Maybe he'd say, "When I say 'good' and 'should' I'm just telling you what my friend the coming dictator and I want. We want every group's members to feel okay about life on average. If you share our desire then follow us and we'll get it done."

Another guy might agree that "X is good" means "I want X" but want the maximizing of average pleasure pure and simple. In that case he'd push "hedonic utilitarianism" as a practical program but he wouldn't be a utilitarian in a theoretical sense. Similarly, a "divine command theorist" might think that God has commanded us to maximize average pleasure.

Expand full comment

Once you even start thinking there is a calculation to be done, you've gone down a wrong path. Curtis summarized his views years ago and called himself "pronomian" https://www.unqualified-reservations.org/2008/06/olxi-truth-about-left-and-right/

Pronomianism is like traditional ethics, which is all about honesty. If the Europeans promise to respect the Navajo territory, then invade it, they have done evil. But if they've made no such promise, well, who really owns a piece of land? Their ancestors killed the previous inhabitants and so on. Total utility? Would they think about my utility, if the situation was reversed? If the answer is no, why even consider it?

Expand full comment

What is the "pronomian" account of goodness?

"'I should do X' = 'I promised to do X'" can be called various things.

"'I should do X' = 'I promised to do X'" can be combined with "'X is good' = 'I want X.'" Hobbes endorses the combination of these formulas.

It's a cynical endorsement, because Hobbes doesn't really give a shit about promise-keeping -- not in any deep way.

Which may be why Curtis has said "Might makes right" at least a couple of times.

Hobbes's cynicism has nothing whatsoever in common with Biblical, Stoic, or Platonic accounts of what goodness and obligation boil down to. He likes honesty, sure; everyone likes someone else's honesty.

Expand full comment

This is a short, cutting description of a fundamental flaw in the psychology of leftism (or Christianity, more generally).

We evolved to care for ourselves and our immediate family, then our kin and close friends, then our extended kin and extended network, and only distantly - or not at all - about anyone else.

But to hell with reality: leftism demands you constantly swallow the lies.

Expand full comment

"Pope, I want to be a good person, but I don’t believe in God. And the Pope was like: no problem, my child. Just act as if you did."

"Simulated emotions are bad, kids. Just be you."

Wat Do?

Expand full comment