This sounds deeper than it is. The root cause of millions of people dying of malaria is malaria and mosquitoes, and if you eliminate them by donating more to the Against Malaria Foundation (rather than, say, the Make-A-Wish Foundation for terminally ill kids), then those people will live longer, healthier, more productive, less exhausted lives. Sure, they'll die of other stuff later, but in the meantime, their lives and societies are better. Sometimes these things are just pragmatic problems that don't need a huge overlay of geopolitics or moral philosophy.
I think eliminating smallpox was a clear net win for humanity. Likewise for eliminating malaria, or river blindness, or Covid.
"The first is underestimating how much attention the EA subculture already gives to these psychological issues—the mismatch between human nature as it is, and how an ‘ideal utilitarian’ would ideally act. (...) We have no illusions that people are naturally equipped to be good at utilitarian reasoning or EA. We don't even think that most people could ever get very good at it."
And still, this manages to miss the issue even while explicitly stating it (albeit reversed): it's not that humans aren't good at utilitarian reasoning; it's that those who are good at utilitarian reasoning aren't good at "human".
Sure, EA advocates are humans in the sense that they're like you and me (especially the nerdier of us), in that they have rights, in that if you prick them they will bleed, and so on. Some would probably even ace a Voight-Kampff. But they aren't quite there in several ways that matter.
They're part of a subset of people - I'll include me - that struggle to human (what with autism, being a nerd, late stage capitalism, and so on). The difference is that EAs, along with a number of other factions, consider this an advantage, and make a project out of it. Kind of like incels making a project out of being unable to get laid.
The problem is that by not getting humans in general, they also don't get what humans want, or what makes humans happier (heck, not even what makes them mentally healthy). They surely don't get what makes humans tick, and they can barely understand societies (at best they can analyze them and make some abstract maps, which they mistake for the territory).
EA recipes are as if an alien race with a totally different civilization, habits, priorities, and values came to teach humans what to do "to be happy". It doesn't matter if they're more intelligent. It doesn't even matter if their recipes can work in a human context, or at least be enforced.
Let's even say they're better than "normies". They're, say, to most people, what a Vulcan is to a human. Still, Vulcans teaching humans what they should do and how to be happy is a recipe for disaster that only a nerd with delusions of grandeur and/or revenge fantasies would consider viable. Vulcans don't get humans. Humans get humans.
The best that Vulcans (a minority, at that) can do is try to learn to human in a human society. Humans don't want and don't need their advice (nor will they listen to it). Kind of like the sighted don't need a blind man's advice on how to paint, or even on what color their house should be.
There is a general understanding in our ever-so-scientific culture that objectivity equals good and subjectivity equals bad. You yourself have written plenty about “scientific public policy”, the Cathedral, etc. I feel that this age sorely needs some Kierkegaardian subjectivity, the realization that objective relations are for objects and subjective relations are for subjects. Objectively speaking, one wife is less than ten tribesmen—but to me, subjectively, one wife is infinitely greater. And that’s a good thing.
Geoffrey writes:
“Christ’s Sermon on the Mount is pure sentientism. If you don’t like sentientism, you'll hate Christianity.”
As a Christian, or at least a Mormon if we don't count as that, I can confidently say that the second commandment in the law is “thou shalt love thy neighbor as thyself.” The words “thy neighbor” are significant. Christ did not tell me to love ten tribesmen as I do my wife—on one hand are strangers, on the other a sort of super-neighbor, close geographically, emotionally, and familially. The true divine wisdom is to love one's neighbor. Of course, neighborliness takes many forms; Christ gives the parable of the good Samaritan to remind us that geographic neighbors have a claim on our goodwill even if they are not our ethnic or religious neighbors. But I digress.
The experience of a fly is real (imo), but the fly is hardly my neighbor. My duties extend in concentric growing circles. First is my duty to my own center, that is, to God. Second is to myself and my wife, who is bone of my bone and flesh of my flesh. Then to my children (first is on the way) and family, then my community, nation, and finally the whole human race. Maybe outside that is another circle for higher mammals and then another for vertebrates, but at this point the circle has grown so diffuse that it is almost meaningless, and simple hunger on the part of an inner circle's members takes precedence over the very lives of those in the outer ones.
I think this is the only human way to live inasmuch as the human is a subject and not an object. Kierkegaard uses the formula “subjectivity is truth”, and I think this well summarizes the whole affair. You write of shared fictions between husband and wife that make the relationship work; this is just an affirmation that their subjective reality is different from the objective reality—and better, more functional, more fulfilling. A “scientific” relationship with another human being probably ends up looking like circling: all very factual, “I feel this right now” immediacy, and it makes subjects crazy. The subjective approach is, as you say, entirely natural, the natural instinct of every normal human; it is the teaching of holy writ and entirely in accordance with honest philosophy. It is the WIS to the INT of scientism and objectivity.
This is reasonable. Peter Singer, the godfather of EA, talked a lot about 'the moral circle' that's centered on self and family. He doesn't argue for eliminating the concentric circles of concern; he just argues for expanding them in a more thoughtful way, and for helping people outside one's inner circles in much more cost-effective ways rather than sentimental do-gooding ways.
Singer doesn't expect people to give away 95% of their net wealth to random Africans. He just hopes that if they donate to charity at all, they'll take into account whether the charity actually does anything useful, rather than just giving them a warm glow.
EA sounds like an attempt to apply Enron's accounting style to the world of morality: counting assumed future gains to offset present-day costs.
> If I am in a war against the Nazis, and I see a German helmet in my scope, containing a being which has experiences that matter, can I blow his experiences out through a fist-sized hole in the back of his head? If not, I cannot fight a war and the Nazis win.
Given how much they contributed to rocket technology (a necessary precursor to the future galactic civilization), that's probably not the only objection EA would have to fighting actual (as opposed to redneck) Nazis.
For the low cost of $.30 a day, you too can support a child of the elites. For your monthly donation you will receive writings from the Substack writer you choose to save!
>> Anything that interferes with that long-term vision is, to the best and brightest EAs, odious. We're aware of short-term/long-term tradeoffs in terms of happiness and suffering.
>> Of course, every authentically great religious leader has been a sentientist at heart.
Sermon on the Mount? It's just a standard revolutionary speech. I 100% buy Reza Aslan's Zealot thesis, and even before his book I had been reading the New Testament for years through this lens of Jesus-as-a-revolutionary-leader. Like Che Guevara or Lenin. And we know that those were nice, moral people.
I actually think that the exact opposite is true: NO authentically great (whatever that means) religious or non-religious leader has been a sentientist at heart. If they had tried to empathize with every person they were leading, they would have failed.
Meant to leave this on your previous post but I might as well leave it here:
As a recovering rationalist, I have a particular disdain for utilitarianism. Put simply: utilitarianism is a scam used by the person who decrees what the utility function is, to silence dissent towards their leadership.
Think about it. At the end of the day, what is utilitarianism? Utilitarianism is a philosophy that says "things with more utility are better than things with less utility". Ok, sure, fine, that's cool. What is utility? Utility is good things that make people happy, for some philosophical definition of happy.
Ok, sure, fine, that's cool. Who decides what that definition is? When two people disagree, who decides which one is right? In a normal conflict of values, the decision function is "power", as you've written about at length. In a utilitarian context? Well, in my experience, it mostly looks like the same kind of social mind games that the woke pull. And if utilitarian ethics reduces to 'social status manipulation games' whenever there's a conflict, then how is that not just tyranny with extra steps?
You've actually hit on one of my favourite examples.
> Personally, to save one of my own children, I might very well condemn all of Central Asia to death by trolley. To save them both I could throw in Brazil. I must be a very ineffective altruist, I’m afraid.
Every instance of utilitarianism I've ever seen seems to value every human life equally and interchangeably. Utilitarianism seems to always say that one life in my neighbourhood is exactly equal to one life off in some far-flung jungle. Oddly, I can't seem to get to that point from "more utility is better", and yet somehow they always do.
Here's my proposition: human lives are not equal, and there is more utility in saving lives that matter more, relative to saving lives that matter less. Not only are human lives not equal, but human lives don't even have an objective, concrete, singular value. The evaluated value of a human life depends on the social distance from the evaluator. My family's lives are more valuable than other peoples' lives, _to me_. This can be true even while recognizing at the same time that their lives are _not_ more valuable to other people.
"True" Utilitarianism would be agnostic on the question of whether or not my particular metric for utility is correct. It would simply say, taking my metric as an axiom, it is good to maximize utility. And yet, for some reason, if you were to, for example, go to an EA meeting and say some variant of "we all live in America. Fuck Africa. Who gives a shit if bushmen die of malaria. One American is worth a hundred of them, and so the greatest utility is to stop wasting money over there and spend it over here"... try it and let me know how it goes.
There is no principled, objective, _utilitarian_ way to make the judgement that my utility function is "wrong" but the "maximize lives saved" function is "right". None. It doesn't exist. But utilitarians will pretend it does exist, and invoke 'utility' to silence all viewpoints to the contrary. Ergo, utilitarianism is just a scam used by whoever arbitrarily decided what utility function we're using, to silence anyone who thinks a different function would be better.
The utility function always seems geared to deliver "rational" justification for some subjectively chosen goal. It's "the ends justify the means" with extra steps.
One of the things that make Mike's life objectively better than Tom's might be that Mike cares more about his own kids than about some stranger's kids while Tom doesn't care more about his own kids than about some stranger's kids.
There aren't any Toms, of course. But more realistically: if you care more about people who are like you (even if they're strangers) than about people who aren't like you, then your life is better than the life of someone who doesn't care more about people who are like him than about people who aren't like him.
There are less arbitrary ways to go about it.
https://repository.lib.ncsu.edu/handle/1840.16/10464
This looks like a good read! Reminiscent of Friston. At first glance it seems that the thesis derives the notion of a utility function, which is categorically distinct from an aggregate utility function, which is the subject of Eiden's critique.
It's very much related to Friston.
Correct: I'm not interested in defending any kind of Benthamite utilitarianism at all, only a fairly generic and physically-informed version of the basic notions, which I believe is not only on much better grounds but also has the capacity to illuminate the provenance of preferences that, in more liberal social circles, would be seen as arbitrary.
(e.g. under this perspective it's universalist altruism that's superstitious, not tribalism, which is a reasonable response to the default, total natural disorganization of physical systems)
((Part of the subtext here is that I'm saying that the vast majority of self-styled rational utilitarians are trash at it))
It's really just the Calculation Problem all over again, if you think about it. I feel like making it about 'sentientism' makes it easier to obscure this fact, because animals are less complex and thus by including animals you simplify the problem on average - i.e. you can't say that all humans prefer to exist, but you can say that sentient creatures, generally speaking, prefer to exist.
This was one of Whewell's critiques of JS Mill.
"The math is definitely telling us, I feel, to own the libs." I LOLed.
So much passion in the replies here!
And more than a little defensiveness.
EA makes a lot of people uncomfortable because it highlights just how little serious thought they've given to (1) the ethical implications of other sentient lives being sentient, and (2) future/potential sentient lives being ethically important, and vastly out-numbering present lives.
I'm well aware that we're evolved to treat 99.9999+% of other sentient beings as non-sentient and unworthy of moral concern. I've been teaching evolutionary psychology since 1990. Parenting, nepotism, tribalism, and anthropocentrism run deep, obviously, and for good adaptive reasons. They're Lindy. Time-tested and battle-proven. But that doesn't mean they're ethical in any principled or aspirational way.
If natural = good, then modern leftist woke identity politics is also good, because runaway virtue-signaling, self-righteous moral panics, and performative sentimental collectivism are based on moral instincts that also run deep. But these moral instincts aren't always good. They're often just short-sighted, self-serving, and idiotic. The naturalistic fallacy can't adjudicate what kinds of morals and political aims are worth adopting, now, given modern technological civilizations and Darwinian self-awareness.
In case anybody is curious to learn a little more about Effective Altruism as it's actually practiced -- rather than some of the straw man portrayals here -- the course syllabus for my 'Psychology of Effective Altruism' course is here, including an extensive reading list and links to some good videos: https://www.primalpoly.com/s/syllabus-EA-2020-spring-final.docx
I come from a philosophical background. I even got a degree. I don't often mention it, because of the obvious "haha, a Big Mac with medium fries, please" jokes, which are mostly on point insofar as the real-world applicability of such degrees is concerned.
I am very much familiar with utilitarianism and all of its contemporary offshoots. Once upon a time, I also used to find something like "effective altruism" an interesting and even possibly appealing idea. I know Peter Singer very well. My lecturer in ethics (who was, of course, a vegan feminist) was a massive fan.
You should note that nowhere did I say in my original comment that simply because not caring about other people is natural, it is also "good", ethically speaking. So just cool it with the "naturalistic fallacy" remarks. What I DID say was that being an effective altruist is most likely not a very efficient survival strategy, evolutionarily speaking. I do not see you challenging that point in your comment.
And this is already a big problem. Any system of ethics essentially prescribes a certain pattern of behavior to be followed throughout life. The base-level evaluation of the effectiveness of any pattern of behavior is how antifragile it is to evolutionary pressures. Rationally, your pattern of behavior might be perfectly logically consistent, deduced from axioms which every rational human being considers self-evident. Great. But if in practice it leads to you being pwned by free-riding egoists, it is absolutely worthless.
Jesus is cool and all, but he was a literal God. Dying on the cross to save humanity is God's work, not any man's.
You talk about the "short-termism" of modern society, e.g. "We know that most intelligent life in the universe evolves to chase proxies for biological fitness (happy drugs, happy movies, happy careerism) rather than long-term fitness itself (good babies & civilizations)." But how are you going to fight that? What does "good babies & civilizations" mean? Are there, perhaps, "bad" babies? And even whole civilizations which are bad? "Bad" things supposedly shouldn't exist, right? Is this *gasp* eugenics?!
The so-called "real" existential threats cited by you are actually pretty well publicized already. By now everyone and their grandmother has seen an interview with Elon Musk or some other random tech entrepreneur sounding the alarm about "existential AI risk". Everyone knows about nukes - BY FAR the most publicized "existential threat" of the 20th century. And bioweapons are still one of the favourite excuses of America when it talks about "spreading democracy" somewhere.
Here is one truly unpublicized existential threat - collapsing fertility rates in all developed societies. Given EA's preoccupation with *future* generations, shouldn't this be your top priority? How do we deal with it?
Brief reply: of course some babies and some civilizations are better than others. We wouldn't have evolved to do intuitive mate choice for indicators of genetic quality if offspring didn't differ in genetic quality in ways that matter to their future lives, and to their potential reproductive success in turn.
And we wouldn't bother worrying about the current state of our civilization if some civilizations weren't better than others. We wouldn't be here on Gray Mirror debating how to improve and defend our civilization if all civilizations were equal.
"EA makes a lot of people uncomfortable because it highlights just how little serious thought they've given to (1) the ethical implications of other sentient lives being sentient, and (2) future/potential sentient lives being ethically important, and vastly out-numbering present lives."
It's rather the opposite.
EA's (2) ("future/potential sentient lives being ethically important") only became a thing at the moment in history when people started caring more about themselves and less about the polis, future generations, or even their own spouses and offspring.
It's a narcissistic compensation for the above - same as how "caring for the people in Africa" is more often than not a cop-out for not caring for those in one's own life and one's own neighborhood (and a career, if you can get in on the racket).
As for (1): it's, in the same manner, what happens when people are less connected with nature than ever (while pretending to be all about connecting with it).
People living close to nature, near animals, and having animals of their own to feed and tend (like humans in rural areas did for millennia), can kill an animal with nary a second thought. They'll gladly eat animals too.
Urbanite Starbucks dwellers for whom animals have always been things in Disney movies, documentaries, or, at best, their cats and dogs, on the other hand, turn vegan and go on about "the ethical implications of other sentient lives being sentient".
In this sense, utilitarianism and EA end up being modern mystification and compensation.
> In case anybody is curious to learn a little more about Effective Altruism as it's actually practiced --
I'm not going to read your link, so if that is sufficiently bothersome to you, feel free to ignore this comment.
But I take exception to this. I'm not speaking for any other commenters. But what I am addressing is in fact EA as it's actually practiced.
An academic course on a subject is rarely if ever representative of how things are _actually_ practiced, in the real world. It doesn't really matter what the Official™ definition of EA is. "EA as it is actually practiced" means, literally and directly, "all and only the actions, as taken by people who self-identify as members of EA, that are claimed to be motivated by EA"
I've been reading LW and SSC for almost a decade now. I have encountered lots and lots of self-identified EAs. I have heard lots and lots of the ideas that they have posted, ideas that they claim are motivated by EA and designed to achieve EA goals. My comments are in reference to those people, those ideas, and my experiences thereof.
My core claim is that "utility" is poorly defined. In fact, not only is it poorly defined, it is not objectively definable in principle. At best you can say "assuming we all agree that X and Y and Z, A would generate more utility than B". I disagree with X and Y and Z, and there is no way, not even in principle, to use utilitarianism to adjudicate the disagreement.
This would be fine, if we all agreed with X and Y and Z. This would also be fine if EAs scoped their claims to "assuming we all agree with X and Y and Z". But that's not what happens in practice. What happens in practice is that they will assert (usually somewhat indirectly) that X and Y and Z are obviously and self-evidently justified by utilitarianism, and the discussion ends there.
To use the concrete example:
"Assuming we all agree that each human life is of equal utility value for the purposes of saving lives, the most optimal use of altruism is mosquito nets."
I reject that each human life is of equal utility value.
Every conversation I have ever had on this subject ends with my counterparty saying some variant of "yeah but those lives are easier to save, so you can save more of them, and more lives is bigger than less lives, so that is more optimal!" This presupposes what I'm rejecting: that 2 lives is more utility than 1 life, regardless of other factors.
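To make the "assuming X and Y and Z" point concrete, here's a minimal sketch in Python. Every number and weight in it is invented purely for illustration (no real charity data); the only thing it shows is that the same maximization procedure returns opposite rankings depending on which utility function you hand it.

```python
# Toy illustration only: the interventions, lives-saved numbers, and the 100x
# "near" weight below are invented for this comment, not real charity data.
# The point: the same "maximize utility" procedure ranks the same two options
# in opposite orders depending on which utility function you assume.

interventions = {
    # hypothetical outcomes: lives saved far away vs. lives saved locally
    "bed nets abroad":  {"far": 10, "near": 0},
    "local healthcare": {"far": 0,  "near": 1},
}

def utility_equal(outcome):
    """Axiom set 1: every human life counts the same."""
    return outcome["far"] + outcome["near"]

def utility_weighted(outcome, near_weight=100):
    """Axiom set 2: lives are weighted by social distance to the evaluator."""
    return near_weight * outcome["near"] + outcome["far"]

for name, outcome in interventions.items():
    print(f"{name:16s} equal-weight: {utility_equal(outcome):4d}   "
          f"distance-weighted: {utility_weighted(outcome):4d}")

# "bed nets abroad" wins under axiom set 1; "local healthcare" wins under
# axiom set 2. Both runs maximize utility. The disagreement lives entirely in
# the choice of weights, which is the step utilitarianism cannot adjudicate.
```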
As an aside: just wanted to let you know that, years ago, the book and podcast you recorded with Tucker Max were a major positive influence on my life. Thank you for putting that into the world.
You're welcome for the Mate book and Mating Grounds podcast. Glad they were useful! My dating advice for single guys should, I hope, be less controversial than EA seems to be.
To be clear, it was helpful for improving my mindset. It was not helpful for finding dates or partners. And, incidentally, if you ever release an updated version: "Move to Austin" is no longer good advice. It is very, very bad advice for dating.
Austin does seem to have changed a lot since we recorded that advice c. 2014. I still love it, but it's got a different vibe.
> the ethical implications of other sentient lives being sentient, and (2) future/potential sentient lives being ethically important, and vastly out-numbering present lives.
You haven't actually explained why I should care; you're just stating a fact (other lives are sentient) that I already know, and asserting that it matters. But why? Can you give me a concrete reason why I should care if some guy in Madagascar is having a bad day? Let's posit that I don't care. Where do we go from here?
It's OK to bite the bullet and say 'I simply don't care about other beings suffering, even though I could reduce their suffering'.
As long as you're honest with yourself that you don't care.
To me, that's a deeply unethical view of life, but YMMV.
The problem as I see it is thus:
Caring about the suffering of arbitrary other sentient beings is not a Nash equilibrium, which means it is susceptible to holiness spirals. *You* donate 10% of your income to effective charities, but *I* donate 11%. Why is 10% a principled choice? How can you justify any personal experience of pleasure when that time or money could be used to help suffering beings?
Therefore, EA is a recipe for feeling guilty all the time, and if I wanted that I'd go with a more developed tradition like Catholicism. The alternative would be to extend your personal locus of control to all of humanity. Do you agree or disagree with the following statement: "If I (or a person of my choosing) were appointed *dictator* of the human race, there would be more utility in the world". If you disagree, why do you think that it would not be a good course of action?
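Here's a toy sketch of the holiness-spiral dynamic I mean. All the numbers are made up, and the payoff assumption (that top-donor status is worth more to each player than whatever it costs to get there) is mine, not something EA claims:

```python
# Two players compete for altruism-status by out-donating each other.
# Assumption (mine, purely illustrative): being the top donor is worth more
# than any donation it takes to get there, and donations are capped at 100%.

BUDGET = 100          # maximum donatable percentage of income
STATUS_VALUE = 1000   # assumed value of top-donor status, in the same units

def best_response(my_current, rival):
    """Donate just enough to overtake the rival, if affordable and worth it."""
    needed = rival + 1
    if needed <= BUDGET and STATUS_VALUE > needed:
        return needed
    return my_current  # can't, or isn't worth it to, escalate further

a, b = 10, 11         # the 10% vs. 11% opening bids from the comment above
for _ in range(200):
    new_a = best_response(a, b)
    new_b = best_response(b, new_a)
    if (new_a, new_b) == (a, b):
        break
    a, b = new_a, new_b

print(f"escalation only stops at the cap: a = {a}%, b = {b}%")
# Prints a = 100%, b = 99%: neither 10% nor any other interior level is
# stable, which is the point: there is no principled place to stop short
# of the cap.
```

A real model would need actual payoffs, of course; the sketch only shows why "why is 10% principled?" has no stable answer from inside the frame.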
The essence of his argument is passing moral judgement on you:
> To me, that's a deeply unethical view of life, but YMMV.
You're a bad immoral person, while I'm a good moral person! I win altruistic virtue signalling points! And I get the cool friends and a hot wife to reproduce with! Heck yeah, evolution built me to virtue signal my altruism so hot chicks can know I'm the good guy that makes good babies!
Goodhart's law applies here: altruism *was* a good fitness signal, but it isn't anymore, because it can be faked in ways that are really bad for our commons. Or perhaps it just breaks down when your group is bigger than 50 people (hunter-gatherers). We're not evolved enough yet for living in civilizations. The buck stops at our genes: all behaviour is rooted in genes, genetics is destiny, and long-term outcomes are completely determined by our genes. Our genes are faulty. Or perhaps mine aren't (re virtue signalling), but the average is still crap.
The path to defeating this sophistry is to expose it for what it is, and to say that you don't fucking care. And then it will spread as more people join in.
How did we go from him not caring about a guy in Madagascar to him not caring about other beings suffering? Also, why are those the only options being considered?
Sentient lives are important...BECAUSE THEY JUST ARE, OKAY!?
The problem is, in the end none of this matters. No matter how far into the future you try to optimize your utility function, the result will always be multiplied by the big zero that is the heat death of the universe.
Unless, of course, there does exist an eternal reality which transcends our present material existence. If there is even the slightest chance that such a reality exists, then the only rational, utility maximizing path is one which involves genuine spiritual seeking to find it.
(Yes, I am familiar with Pascal's Mugger. The thing is, Pascal was (more or less) right, and rationalists have fundamentally misunderstood the dilemma).
But in the event that there is no eternal form of sentient existence, I see no reason to believe that transcendent ethical norms (like a norm to maximize utility) exist either. If such is the case, my preference would be to live a life that feels meaningful and has joy. Such a life would likely involve concern and even sacrifice for others, but I would see no point in burning myself out trying to win a game in which there is no referee and no one keeping score.
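For what it's worth, the wager-shaped step in the argument above can be written out in ordinary expected-utility form. A minimal sketch, with p, U_eternal, and U_finite as illustrative symbols of my own choosing rather than anything standard:

```latex
% p         : probability that an eternal, transcendent reality exists (assumed > 0)
% U_eternal : value of finding it (assumed unbounded, or at least vastly larger
%             than any finite payoff)
% U_finite  : value of everything bounded by the heat death (the "big zero")
\[
  \mathbb{E}[U_{\text{seek}}]   = p\,U_{\text{eternal}} + (1-p)\,U_{\text{finite}},
  \qquad
  \mathbb{E}[U_{\text{ignore}}] = U_{\text{finite}}.
\]
% With U_finite driven to zero by the heat-death premise and p > 0, the first
% expectation exceeds the second no matter how small p is. This is the same
% structure Pascal's Mugging targets; my claim is that Pascal was nonetheless
% (more or less) right about it.
```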
I think if you put Toby Ord and Yarv in a room together and locked the door, you’d get Ord to agree to something like “owning the libs is the most effective altruism”. Would you agree that regime change in USA of the kind conceived here is the imperative of an Eff Alt?
One might argue, for example, that this list by Ord (https://80000hours.org/2020/04/longtermist-policy-ideas/) can best be achieved, or is only really likely to be achieved, by a regime of the kind that Yarvin has conceived here on GM.
I think there's a strong case that 'owning the libs' is a legit longtermist policy idea. Whether EA as a millennial semi-academic community can be convinced of this is an open question.
I'm not sure EA makes anyone uncomfortable
It makes me wanna puke. Nausea's a kind of discomfort.
Largely unrelated, but I wouldn't be worried about the great filter. It seems most likely that we're just early birds on the cosmic stage.
We agree that meaningful talk of natures presupposes a world-forming Self who wants things to exist in certain ways.
"But Who's To Say what God wants?"
"Your mom, that's who."
"EAs understand that most people are not, and may never be, capable of rational utilitarian reasoning."
I am sorry, but these people are retarded.
There is nothing "irrational" about caring vastly more about those close to you than about other people. In fact, such behaviour, prioritizing the well-being of those who are most closely related to us, is profoundly rational from a genetic and evolutionary point of view. Have these people read Dawkins, at least? Caring about unrelated people as much as about related people ends up with your genes being bred out of the gene pool. Which is why we have billions and billions of people alive with obscenely xenophobic and tribalistic behaviour, and very, very few nerds with effectively altruistic behaviour. These people are not even meant to be; they are nature's accidents, because nature abhors such blind universalist altruism.
You should make a post "Effective Altruism considered retarded" like you did a decade ago about Futarchy.
But on the general topic of long-term philosophy - it is not about ethics. Every single one of the little sects mentioned by Mr. Miller (NeoReactionaries. Traditionalists. Monarchists. Nationalists. Religious pronatalists. Scholars. Crypto dudes. Ornery post-libertarians. Post-Rationalists.) does indeed have a long-term vision. They ALL believe they want to maximize long-term value. Even the libs, as you mention, Curtis. And yet they disagree so much about what that long-term vision looks like. Why? Aren't they all trying to maximize "human value"?
When you drill deep enough into ethics, you realize it is all about aesthetics. All of these little sects simply have their own different views on what they consider "beautiful", what they consider a "good life" and of course - "human value". And it is all about aesthetic preference. This is not aesthetics in the narrow craft sense as in what music genre or architectural style you like. This is "broad" aesthetics - the aesthetics of human life, the aesthetics of being.
Crypto-anarchists have one view on life aesthetics. Communists have the near opposite, white nationalists are vastly different from both, and the libs are very different in their life aesthetics from all of them combined.
The history of human civilizations is different peoples taking turns enforcing their own views of aesthetics on one another.
Yes, but beauty is real, and some people perceive it more clearly than others do.
(Of course, every authentically great religious leader has agreed with me at heart. Christ’s Sermon on the Mount is pure agreement with me. If you don’t agree with me, you'll hate Christianity. And, yes, there's an active sub-sub-culture of Christians who agree with me.)
This is what I really hate about rationalism-adjacent rhetoric. They all imagine themselves to be preaching to stupid hicks or at best midwits.
I wouldn't bother to say anything on this site at all if I thought I was only talking to hicks or midwits.
I mean... You basically said Jesus would have held your position. I can't think of a world where that helps your argument, unless you expect a significant portion of the audience to have double-digit IQs.
Engaging with "EA" and adjacent claptrap as if they were stand-alone ideas (like, say, a purported proof of the Poincaré conjecture) is a catastrophic mistake.
Political ideas do not stand alone. More than anything it matters *who* is pushing something. And it isn't any kind of secret that "EA", "rationalism", etc. are effectively a rerun of the Comintern -- with SV oligarchs (and their pet Yudkowskys) in the role of the Party.
Biting a hook is a bad move for a fish, observe, *not* because of any objective defect of *the hook*.
EAism provides the familiar Christian toolkit for committing atrocities with a clear conscience (with "moral imperative", even), with ready justifications for arbitrarily dystopian reichs. As well as stoking the narcissistic "we", with a rich selection of "we must..."s satisfying for tenured professors and redditards alike.
But this is simply a description of the mechanisms whereby the ethics-flavoured rootkit wrecks your brain. More interesting is to look at *who* is trying to install it in your head; and the *why* quickly becomes apparent.
Categorical rejection of ideas on account of their source is not, as the "enlightened" sociopaths want you to believe, an act of folly. Rather it is your first and often only line of defense against a caste of parasites who see your mind as their plaything, your will and personal sense of right and wrong as an obstacle, and (openly!) equate you with a broiler chicken.
The main reason I've always been skeptical of EA is that I haven't seen them attacking leftism.
All kinds of lefties have caused huge problems all over the world. From the old hardcore worker commies to the current woke libtards, they all create misery and ruin societies.
The root cause of a lot of suffering in the world is leftism. That's why reducing its impact should be one of the main priorities of effective altruism. But for some reason, it's not...
I agree that this is a big problem in EA. It's tried to stay relatively non-political. But given that it's dominated by academically elite Millennials and Gen Z, there's a lot of implicit wokeness.
Many times the initial reasoning of effective altruists is sound. A group of people is missing something cheap which would make their lives better. Maybe it would heal some illnesses, or prevent them from happening in the first place.
But my question is: if the solution is so easy and cheap, why aren't these people using it already?
There is usually something much more serious going on.
If you give them a cheap medicine, they might not die from that particular illness anymore. But because nothing was done to fix the root cause, they will probably die from something else. Or they will live longer but their life still sucks.
That's why it would be important to dig deeper to get to the root cause and fix it. If you do that, then lots of other secondary problems will be fixed automatically without any effort.
If you do an honest evaluation of the situation, very often you will see that the root cause is more or less related to leftism.
That's why, in my eyes, if a person really wants to be an effective altruist, the best bang for the buck would come from effectively opposing leftism in all its forms.
The well-being of the whole society depends on politics. That's why it sounds a little bit weird that effective altruists try to stay non-political.
This sounds deeper than it is. The root cause of millions of people dying of malaria is malaria and mosquitoes, and if you eliminate them by donating more to the Against Malaria Foundation (rather than, say, the Make-A-Wish Foundation for terminally ill kids), then those people will live longer, healthier, more productive, less exhausted lives. Sure, they'll die of other stuff later, but in the meantime their lives and societies are better. Sometimes these things are just pragmatic problems that don't need a huge overlay of geopolitics or moral philosophy.
I think eliminating smallpox was a clear net win for humanity. Likewise for eliminating malaria, or river blindness, or Covid.
"The first is underestimating how much attention the EA subculture already gives to these psychological issues—the mismatch between human nature as it is, and how an ‘ideal utilitarian’ would ideally act. (...) We have no illusions that people are naturally equipped to be good at utilitarian reasoning or EA. We don't even think that most people could ever get very good at it."
And still, this manages to miss the issue, even while it explicitly states it (albeit reversed): it's not humans who aren't good at utilitarian reasoning, it's rather those good at utilitarian reasoning that aren't good at "human".
Sure, EA advocates are humans in the sense that they're like you and me (especially the nerdier of us), in that they have rights, in that if you prick them they will bleed, and so on. Some would probably even ace a Voight-Kampff. But they aren't quite there in several ways that matter.
They're part of a subset of people - I'll include me - that struggle to human (what with autism, being a nerd, late stage capitalism, and so on). The difference is that EAs, along with a number of other factions, consider this an advantage, and make a project out of it. Kind of like incels making a project out of being unable to get laid.
The problem is that by not getting humans in general, they also don't get what humans want, or what makes humans happier (heck, not even what makes them mentally healthy). They surely don't get what makes humans tick, and they can barely understand societies (at best they can analyze them and draw some abstract maps, which they mistake for the territory).
EA recipes are as if an alien race with a totally different civilization, habits, priorities, and values came to teach humans what to do "to be happy". It doesn't matter if they're more intelligent. It doesn't even matter if their recipes could work in a human context, or at least be enforced.
Let's even say they're better than "normies". They're, say, to most, what a Vulcan is to a human. Still, Vulcans teaching humans what they should do and how to be happy is a recipe for disaster, one that only a nerd with delusions of grandeur and/or revenge fantasies would consider viable. Vulcans don't get humans. Humans get humans.
The best that Vulcans, a minority, can do is try to learn to human in a human society. Humans don't want and don't need their advice (nor will they listen to it), kind of like the sighted don't need a blind man's advice on how to paint, or even on what color their house should be.
Contra Geoffrey on EA:
There is a general understanding in our ever-so-scientific culture that objectivity equals good and subjectivity equals bad. You yourself have written plenty about “scientific public policy”, the Cathedral, etc. I feel that this age sorely needs some Kierkegaardian subjectivity, the realization that objective relations are for objects and subjective relations are for subjects. Objectively speaking, one wife is less than ten tribesmen—but to me, subjectively, one wife is infinitely greater. And that’s a good thing.
Geoffrey writes:
“Christ’s Sermon on the Mount is pure sentientism. If you don’t like sentientism, you'll hate Christianity.”
As a Christian, or at least a Mormon if we don't count as that, I can confidently say that the second commandment in the law is "thou shalt love thy neighbor as thyself." The words "thy neighbor" are significant. Christ did not tell me to love ten tribesmen as I do my wife; on one hand are strangers, on the other a sort of super-neighbor, close geographically, emotionally, and familially. The true divine wisdom is to love one's neighbor. Of course, neighborliness takes many forms; Christ gives the parable of the good Samaritan to remind us that geographic neighbors have a claim on our goodwill even if they are not our ethnic or religious neighbors. But I digress.
The experience of a fly is real (imo), but the fly is hardly my neighbor. My duties extend in concentric, growing circles. First is my duty to my own center, that is, to God. Second is to myself and my wife, who is bone of my bone and flesh of my flesh. Then to my children (the first is on the way) and family, then my community, nation, and finally the whole human race. Maybe outside that is another circle for higher mammals, and then another for vertebrates, but at this point the circle has grown so diffuse that it is almost meaningless, and simple hunger on the part of an inner circle's members takes precedence over the very lives of those in the outer circles.
I think this is the only human way to live, inasmuch as the human is a subject and not an object. Kierkegaard uses the formula "subjectivity is truth", and I think this well summarizes the whole affair. You write of shared fictions between husband and wife that make the relationship work; this is just an affirmation that their subjective reality is different from the objective reality, and better, more functional, more fulfilling. A "scientific" relationship with another human being probably ends up looking like Circling: all very factual, "I feel this right now" immediacy, and it makes subjects crazy. The subjective approach is, as you say, entirely natural, the natural instinct of every normal human; it is the teaching of holy writ and entirely in accordance with honest philosophy. It is the WIS to the INT of scientism and objectivity.
I'd say a person has a much deeper duty to his dog than to some stranger five thousand miles away.
This is reasonable. Peter Singer, the godfather of EA, talked a lot about 'the moral circle' that's centered on self and family. He doesn't argue for eliminating the concentric circles of concern; he just argues for expanding them in a more thoughtful way, and for helping people outside one's inner circles in much more cost-effective ways rather than in sentimental do-gooding ways.
Singer doesn't expect people to give away 95% of their net wealth to random Africans. He just hopes that if they donate to charity at all, they'll take into account whether the charity actually does anything useful, rather than just giving themselves a warm glow.
EA sounds like an attempt to apply Enron's accounting style to the world of morality: counting assumed future gains to offset present-day costs.
> If I am in a war against the Nazis, and I see a German helmet in my scope, containing a being which has experiences that matter, can I blow his experiences out through a fist-sized hole in the back of his head? If not, I cannot fight a war and the Nazis win.
Given how much they contributed to rocket technology (a necessary precursor to the future galactic civilization), that's probably not the only objection EA would have to fighting actual (as opposed to redneck) Nazis.
As QC pointed out in his interview with Eigenrobot (https://lnns.co/Fx1brpdkgTE), the problem with Effective Altruism is that it is essentially Scrupulosity (https://en.wikipedia.org/wiki/Scrupulosity) for atheists.
For the low cost of $0.30 a day, you too can support a child of the elites. For your monthly donation, you will receive writings from the Substack writer you choose to save!
How does EA benefit me and mine? Only question I want answered.
I'm not owning the libs because I enjoy it, okay? I'm owning the libs because we have to save the world. I do also enjoy it, though.
>> Anything that interferes with that long-term vision is, to the best and brightest EAs, odious. We're aware of short-term/long-term tradeoffs in terms of happiness and suffering.
These EAs sure sound like the libs.
>> Of course, every authentically great religious leader has been a sentientist at heart.
Sermon on the Mount? It's just a standard revolutionary speech. I 100% buy Reza Aslan's Zealot thesis, and even before his book I had been reading the New Testament for years through this lens of Jesus-as-a-revolutionary-leader. Like Che Guevara or Lenin. And we know those were nice, moral people.
I actually think the exact opposite is true: NO authentically great (whatever that means) religious or non-religious leader has been a sentientist at heart. If they had tried to empathize with every person they were leading, they would have failed.