If they were spawned into existence for this thought experiment, then the human, probably.
But if even one of those kittens were mine, entire cities could be leveled before I let anyone hurt my kitten.
I wouldn’t think it moral to save my kitten over a random non-evil person, but I’d still do it.
It wouldn't just hurt your partner, it would hurt you.
We know that following an "objective morality" the 10 people would be the better choice, but it would (indirectly) hurt you.
Humans aren’t perfectly objective.
Allowing that kitten to live will cause untold suffering for other small mammals.
You have the power to stop that suffering!
It's not quite "one million to one"; the meat from 1 million rabbits meets the caloric needs of around 2750 people for 1 year.
How about one million humans or one kitten?
Where is the cut-off point for you?
In any case, I just wanted to point out that if you care about the welfare of damn arthropods, you're going nowhere fast.
Consider this: the quickest, surest, most efficient, and ONLY way to reduce all suffering on earth to nothing forever and ever is a good ole nuclear holocaust.
I feel you're still missing the point. I get that you might be coming from a binary perspective, as evidenced by jumping straight to a nuclear argument (i.e. why bother talking about anything else), but I highly doubt that's the goal of the author. They are trying to make you imagine, and think, about how things fit together. YMMV.
As the article suggests, imagine you must live the lifetime of 1 million factory farmed shrimps. Would you then rather people quibble over whether we should hunt whales to extinction and ultimately do nothing (including never actually hunting whales to extinction to save you because they don't actually care about you), or would you rather they attempt to reduce your suffering in those millions of deaths as much as possible?
I don't think we can say the same of shrimp.
That's why humane killing of cattle (with piston guns to the head) is widely practiced, but nothing of the sort for crabs, oysters, etc. We know for sure cattle feel pain so we do something about it.
In reality, nobody would actually choose to save the lives of 34 crustaceans over the life of a human, even if killing the prawns results in 102% of the suffering of killing the human.
It's the same with all the EA stuff like prioritising X trillion potential humans that may exist in the future over actual people who exist now - you can get as granular as you want and mess around with probability to say anything. Maybe it's good to grow brains in vats and feed them heroin - that'll increase total happiness! Maybe we should judge someone who has killed enough flies the same as a murderer! Maybe our goals for the future should be based on the quadrillion future sentient creatures that will evolve from today's coconut crabs!
> ... if [shrimp] suffer only 3% as intensely as we do ...
Does this proposition make sense? It's not obvious to me that we can assign percentage values to suffering, or compare it to human suffering, or treat the values in a linear fashion.
It reminds me of that vaguely absurd thought experiment where you compare one person undergoing a lifetime of intense torture vs billions upon billions of humans getting a fleck of dust in their eyes. I just cannot square choosing the former with my conscience. Maybe I'm too unimaginative to comprehend so many billions of bits of dust.
It's pretty short, I liked it. Was surprised to find myself agreeing with it at the end of my first read.
* He slides between talking about personal decisions vs decisions about someone else. The argument for Headache is couched in terms of whether an average person would drive to the chemist, whilst the argument for shifting from Headache to Many Headaches is couched in terms of decisions made by an external party. This feels problematic to me. There may be some workaround.
* He describes rejecting transitivity as being overwhelmingly implausible. Is that obvious? Ethical considerations ultimately boil down to subjective evaluations, and there seems no obvious reason why those evaluations would be transitive.
If we have to use math, I'd say: the headaches are temporal - the effect of all the good you've done today is effectively gone tomorrow one way or another. But killing a person means, to quote "Unforgiven", that "you take away everything he's got and everything he's ever gonna have". So the calculation needs at least a temporal discount factor.
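A minimal sketch of what such a discounted comparison might look like, with entirely made-up numbers:

```python
# Toy comparison of transient harms vs. a death, with a temporal discount.
# Every number here is invented purely to illustrate the shape of the calculation.
headache_disutility = 1.0            # badness of one headache-day, arbitrary units
death_disutility_per_year = 365.0    # badness of each year of life lost
years_lost = 40
discount = 0.97                      # annual discount factor

# A headache's effect is gone tomorrow; a death keeps costing every future year.
death_total = sum(death_disutility_per_year * discount**t for t in range(years_lost))
headaches_equivalent = death_total / headache_disutility
print(round(headaches_equivalent))   # headache-days this toy model equates to one death
```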
I also believe that the examples are too contrived to be actually useful. Comparing a room with one person to another with five million is like comparing the fine for a person traveling at twice the speed limit with that of someone traveling at 10% the speed of light - the results of such an analysis are entertaining to think about, but not actually useful.
Thus, when comparing headaches to a man being tortured, there's no clear reason to suppose that there is a number of headaches that is worse than the torture.
(Intuitively, it's hard to say saving 100 people is 100x as good as saving 1, because we can't have 100 best friends, but it doesn't affect the math at all)
Personally, I believe that you can't just add up mildly bad things and create a very bad thing. For example, I'd rather get my finger pricked by a needle once a day for the rest of my life than have somebody amputate my legs without anesthesia just once, even though the "cumulative pain" of the former choice might be higher than that of the latter.
Having said that, I also believe that there is sufficient evidence that shrimp suffer greatly when they are killed in the manner described in the article, and that it is worthwhile to prevent that suffering.
Donating to Sudanese refugees sounds like a great use of money. Certainly not a waste.
Suboptimal isn't the same as wasteful. Suppose you sit down to eat a great meal at a restaurant. As you walk out, you realize that you could have gotten an even better meal for the same price at the restaurant next door. That doesn't mean you just wasted your money.
>ghoulish and horrific math
It's not the math that's horrific, it's the world we live in that's horrific. The math just helps us alleviate the horror better.
Researcher: "Here's my study which shows that a new medication reduces the incidence of incredibly painful kidney stones by 50%." Journal editorial board: "We refuse to publish this ghoulish and horrific math."
I agree this is unintuitive, but I submit that's because of speciesism. What about shrimp makes it so that tens of millions of them painfully dying is less bad than a single human death? It doesn't seem like the fact that they aren't smart makes their extreme agony less bad (the badness of a headache doesn't depend on how smart you are).
If it's sophistry anyway, can't you take Eliezer's position and say God doesn't exist, and some CEV like system is better than Bentham style utilitarianism because there's not an objective morality?
I don't think CEV makes much sense, but I think you're scoring far fewer points than you think you are, even relative to something like that.
ETA: see also the McNamara fallacy https://en.wikipedia.org/wiki/McNamara_fallacy
- they suffer
- we are good people who care about reducing suffering
- so we spend our resources to reduce their suffering
And some (most!) people balk at one of those steps
But seriously, pain is the abstraction already. It's damage to the body represented as a feeling.
The reason is that the structure of the nervous systems of arthropods is quite different from that of the vertebrates. Comparing them is like comparing analog circuits and digital circuits that implement the same function, e.g. a number multiplier. The analog circuit may have a dozen transistors and the digital circuit may have hundreds of transistors, but they do the same thing (with different performance characteristics).
The analogy with comparing analog and digital circuits is quite appropriate, because parts of the nervous systems that have the same function, e.g. controlling a leg muscle, may have hundreds or thousands of neurons in a vertebrate, which function in an all-or-nothing manner, while in an arthropod the equivalent part may have only a few neurons that function in a much more complex manner in order to achieve fine control of the leg movement.
So typically one arthropod neuron is equivalent to many vertebrate neurons, e.g. hundreds or even thousands.
This does not mean that the nervous system of arthropods is better than that of vertebrates. They are optimized for different criteria. A vertebrate cannot become as small as the smallest arthropods, nor can an arthropod become as big as the larger vertebrates: the systems that integrate the organs of a body into a single living organism, i.e. the nervous system and the circulatory and respiratory systems, are optimized for small size in arthropods and for large size in vertebrates.
I'm fairly puzzled by sensation/qualia. The idea that there's some chemical reaction in my brain which produces sensation as a side effect is very weird. In principle it seems like you ought to be able to pare things down in order to produce a "minimal chemical reaction" for suffering, and do "suffering chemistry" in a beaker (if you were feeling unethical). That's really trippy.
People often talk about suffering in conjunction with consciousness, but in my mind information processing and suffering are just different phenomena:
* Children aren't as good at information processing, but they are even more capable of suffering.
* I wouldn't like to be kicked if I was sleeping, or blackout drunk, even if I was incapable of information processing at the time and had no memory of the event.
So intuitively it seems like more neurons = more "suffering chemistry" = greater moral weight. However, I imagine that perhaps the amount of "suffering chemistry" required to motivate an organism is actually fairly constant regardless of its size. Same way a gigantic cargo ship and a small children's toy could in principle be controlled by the same tiny microchip. That could explain the moral weight result.
Interested to hear any thoughts.
The sensation of pain is provided by dedicated sensory neurons, like other sensory neurons are specialized for sensing light, sound, smell, taste, temperature, tactile pressure, gravity, force in the muscles/tendons, electric currents, magnetic fields, radiant heat a.k.a. infrared light and so on (some of these sensors exist only in some non-human animals).
The pain-sensing neurons, a.k.a. nociceptors, can be identified anatomically in some of the better studied animals, including humans, but it is likely that they also exist in most other animals, with the possible exception of some parasitic or sedentary animals, where all the sense organs are strongly reduced.
So all animals with such sensory neurons that cause pain are certain to suffer.
The nociceptors are activated by various stimuli: either by otherwise normal stimuli that exceed some pain threshold (e.g. too-intense light or noise), or by substances generated by damaged cells in their neighborhood.
What specifically makes it so the pain neurons cause pain and the pleasure neurons cause pleasure? Supposing I invented a sort of hybrid neuron, with some features of a pain neuron and some features of a pleasure neuron -- is there any way a neuroscientist could look at its structure+chemistry and predict whether it will produce pleasures vs pain?
It is likely that all that matters is where they are connected in the sensory paths that carry information about sensations towards the central nervous system. Probably any signal coming into the central nervous system on the paths dedicated to pain is interpreted as pain, just as a signal coming through the optic nerves would be interpreted as light, even when it was caused by an impact on the head.
That is too strong a statement to just toss out there like that. And I don’t even think it’s true.
I think children probably feel pain more intensely than adults. But there are many more dimensions to suffering than pain. And many of those dimensions are beyond the ken of children.
Children will not know the suffering that comes from realizing that you have ruined a relationship that you value, and it was entirely your fault.
They will not know the kind of suffering that’s behind Imposter Syndrome after getting into MIT.
Or the suffering that comes from realizing that your heroin addiction will never be as good as the first time you shot up. Or that same heroin addict knowing that they are betraying their family, stealing their mother’s jewelry to pawn, and doing it anyway.
Or the suffering of a once-great athlete coming to terms with the fact that they are washed up and that life is over now.
Or the suffering behind their favorite band splitting up.
Or the suffering behind winning Silver at the Olympics.
Or the agony of childbirth.
Perhaps most importantly, one of the greatest sorrows of all: losing your own child.
Et cetera
We’re talking about a scale here where we have to question whether the notion of suffering is applicable at all before we try to put it on any kind of spectrum.
Rethink Priorities [0] has a FAQ entry on this [1].
[0]: https://rethinkpriorities.org/research-area/welfare-range-es...
[1]: https://forum.effectivealtruism.org/posts/Mfq7KxQRvkeLnJvoB/...
The priors referenced in the “technical details” doc (and good lord, why does everything about this require me to dereference three or four layers of pointers to get to basic answers to methodological questions?) appear to be based entirely on proxies like:
> Responses such as moving away, escaping, and avoidance, that seem to account for noxious stimuli intensity and direction.
This is a proxy that applies to slime molds and Roombas, yet I notice that neither of those made the table. Why not?
I suspect that the answer is that at least when it comes to having zero neurons, the correlation suddenly becomes pretty obviously reliable after all.
There is a simple explanation for the confusion that this causes you and the other people in this thread: suffering's not real. It's a dumb gobbledygook term that in the most generous interpretation refers to a completely subjective experience that is not empirical or measurable.
The author uses the word "imagine" three times in the first two paragraphs for a reason. Then he follows up with a fake picture of anthropomorphic shrimp. This is some sort of con game. And you're all falling for it. He's not scamming money out of you, instead he wants to convert you to his religious-dietary-code-that-is-trying-to-become-a-religion.
Shrimp are food. They have zero moral weight.
Look, I’m not going to defend the author here. The linked report reads to me like the output of a group of people who have become so insulated in their thinking on this subject that they’ve totally lost perspective. They give an 11% prior probability of earthworm sentience based on proxies like “avoiding noxious stimuli”, which is… really something.
But I’m not so confused by a bad set of arguments that I think suffering doesn’t exist.
You've experienced this mystical thing, and so you know it's true?
> They give an 11% prior probability of earthworm sentience
I'm having trouble holding in the laughter. But you don't seem to understand how dangerously deranged these people are. They'll convert you to their religion by hook or crook.
I would hesitate to use that word myself, though my personal experiences have, at times, been somewhat similar to those who do use the word.
Suffering is experience, and my own internal experiences are the things that I can be most certain of. So in this case, yes. I don’t know why you’re calling it “mystical” though.
> They'll convert you to their religion by hook or crook.
I have a lot more confidence in my ability to evaluate arguments than you seem to.
I don't have much to say about the shrimp, but I find it deeply sad when people convince themselves that they don't really exist as a thinking, feeling thing. It's self repression to the maximum, and carries the implication that yourself and all humans have no value.
If you don't have certain measurable proof either way, why would you choose to align with the most grim possible skeptical beliefs? Listen to some music or something - don't you hear the sounds?
There is nothing edgy about it. You can't detect it, you can't measure it, and if the word had any applicability (to say, humans), then you're also misapplying it. If it is your contention that suffering is something-other-than-subjective, then you're the one trying to be edgy. Not I.
The way sane, reasonable people describe subjective phenomena that we can't detect or measure is "not real". When we're talking about decapods, it can't even be self-reported.
> but I find it deeply sad when people convince themselves that they don't really exist as a thinking, feeling thing. It's self repression to the maximum,
Says the guy agreeing with a faction that seeks to convince people that shrimp are anything other than food; that if for some reason we need to euthanize them, they must be laid down on a velvet pillow to listen to symphonic music and watch films of the beautiful Swiss mountain countryside until their last gasp.
"Sad" is letting yourself be manipulated so that some other religion can enforce its noodle-brained dietary laws on you.
> If you don't have certain measurable proof either way
I'm not obligated to prove the negative.
You do feel pain and hunger, at least to the extent you experience touch. You can in fact be even more certain of that than anything conventionally thought to be objective, physical models of the world, for it is only through your perception that you receive those models, or evidence to build those models.
The notion of suffering used in the paper is primarily with respect to pain and pleasure.
Now, you may deny that shrimp feel pain and pleasure. It's also possible to deny that other people feel pain and pleasure. But you do feel pain and pleasure, and you always engage in behaviors in response to these sensations; your senses also inform you secondarily that many other people abide by similar rules.
Many animals like us are fundamentally sympathetic to pain and pleasure. That is, observing behavior related to pain and pleasure impels a related feeling ourselves, in certain contexts, not necessarily exact. This mechanism is quite obvious when you observe parents caring for their young, herd behavior, etc.. With this established, some people are in a context where they are sympathetic to observed pain and pleasure of nonhuman animals; in this case shrimp rather than cats and dogs, and such a study helps one figure out this relationship in more detail.
Eh, perhaps we can’t detect it perfectly reliably, but we can absolutely detect it. Go to a funeral and observe a widow in anguish. Just because we haven’t (yet) built a machine to detect or measure it doesn’t mean it doesn’t exist.
If your definition of suffering describes both the widow grieving a lost husband and a shrimp slowly going (whatever its equivalent is of) unconscious in an ice-water bath... it doesn't seem to be a very useful word.
> Just because we haven’t (yet) built a machine to
Yes, because we haven't built the machine, we can't much tell if the widow is in "anguish" or is putting on a show for the public. Some widows are living their most joyous days, but they can't always show it.
If the teenager gets a job offer, but the job only pays minimum wage, they may judge that the disutility for so many hours of work actually exceeds the positive utility from the PS5. There seems to be a capability to estimate the disutility from a single hour of work, and multiply it across all the hours which will be required to save enough.
It would be plausible for the teenager to argue that the disutility from the job exceeds the utility from the PS5, or vice versa. But I doubt many teenagers would tell you "I can't figure out if I want to get a job, because the utilities simply aren't comparable!" Incomparability just doesn't seem to be an issue in practice for people making decisions about their own lives.
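A toy version of that comparison, with invented utility numbers just to show the shape of the reasoning:

```python
# Toy PS5-vs-job comparison; all utility figures are invented for illustration.
ps5_price = 500.0             # dollars
wage = 7.25                   # dollars per hour (US federal minimum wage)
hours_needed = ps5_price / wage

ps5_utility = 100.0           # arbitrary "utils" from owning the PS5
disutility_per_hour = 2.0     # arbitrary "utils" of misery per hour worked
job_disutility = hours_needed * disutility_per_hour

# ~69 hours of work, ~138 utils of disutility vs. 100 utils of PS5:
# on these made-up numbers, the teenager passes on the job.
print(f"{hours_needed:.0f} hours, disutility {job_disutility:.0f} vs utility {ps5_utility:.0f}")
```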
Here's another thought experiment. Imagine you get laid off from your job. Times are tough, and your budget is tight. Christmas is coming up. You have two children and a pet. You could get a fancy present for Child A, or a fancy present for Child B, but not both. If you do buy a fancy present, the only way to make room in the budget is to switch to a less tasty food brand for your pet.
This might be a tough decision if the utilities are really close. But if you think your children will mostly ignore their presents in order to play on their phones, and your pet gets incredibly excited every time you feed them the more expensive food brand, I doubt you'll hesitate on the basis of cross-species incomparability.
I would argue that the shrimp situation sits closer to these sort of every-day "common sense" utility judgments than an exotic limiting case such as torture vs dust specks. I'm not sure dust specks have any negative utility at all, actually. Maybe they're even positive utility, if they trigger a blink which is infinitesimally pleasant. If I change it from specks to bee stings, it seems more intuitive that there's some astronomically large number of bee stings such that torture would be preferable.
It's also not clear to me what I should do when my intuitions and mathematical common sense come into conflict. As you suggest, maybe if I spent more time really trying to wrap my head around how astronomically large a number can get, my intuition would line up better with math.
Here's a question on the incomparability of excruciating pain. Back to the "moral judgements for oneself" theme... How many people would agree to get branded with a hot branding iron in exchange for a billion dollars? I'll bet at least a few would agree.
Temporary pain without any meaningful lasting injuries? I do worse long-term damage than that at my actual job, just from neck and wrist strain and not being sufficiently active (on a good day I get 1-2 hrs, but that doesn't leave much time for other things), and I'm definitely not getting paid a billion for it.
This article was especially helpful:
https://www.painscience.com/tutorials/trigger-points.php
I suspect the damage you're concerned about is reversible, if you're sufficiently persistent with research and experimentation. That's been my experience with chronic pain.
No they're not! You have made a claim of the form "these things are the same thing"—but it only seems that way if you can't think of a single plausible alternative. Here's one:
* Humans are motivated by two competing drives. The first drive we can call "fear", which aims to avoid suffering, either personally or in people you care about or identify with. This derives from our natural empathic instinct, but it can be extended by a socially constructed group identity. So, the shrimp argument is saying "your avoiding-suffering instinct can and should be applied to crustaceans too", which is contrary to how most people feel. Fear also includes "fear of ostracization", this being equivalent to death in a prehistoric context.
* The second drive is "thriving" or "growing" or "becoming yourself", and leads you to glimpse the person you could be, things you could do, identities you could hold, etc, and to strive to transform yourself into those things. The teenager ultimately wants the PS5 because they've identified with it in some way—they see it as a way to express themselves. Their "utilitarian" actions in this context are instrumental, not moral—towards the attainment of what-they-want. I think, in this simple model, I'd also broaden this drive to include "eating meat"—you don't do this for the animal or to abate suffering, you do it because you want to: your body's hungry, you desire the pleasure of satiation, and you act to realize that desire.
* The two drives are not the same, and in the case of eating meat are directly opposed. (You could perhaps devise a way to see either as, ultimately, an expression of the other.) Human nature, then, basically undertakes the "thriving" drive except when there's a threat of suffering, in which case we switch gears to "fear" until it's handled.
* Much utilitarian discourse seems to exist in a universe where the apparently-selfish "thriving" drive doesn't exist, or has been moralized out of existence—because it doesn't look good on paper. But, however it sounds, it in fact exists, and you will find that almost all living humans will defend their right to express themselves, sometimes to the death. This is at some level the essence of life, and the rejection of it leads many people to view EA-type utilitarianism as antithetical to life itself.
* One reason for this is that "fear-mode thinking" is cognitively expensive, and while people will maintain it for a while, they will eventually balk against it, no matter how reasonable it seems (probably this explains the last decade of American politics).
There was a time when my good deeds were more motivated by fear. I found that fear wasn't a good motivator. This has become the consensus view in the EA community. EAs generally think it's important to avoid burnout. After reworking my motivations, doing good now feels like a way to thrive, not a way to avoid fear. The part of me which was afraid feels good about this development, because my new motivational structure is more sustainable.
If you're not motivated to alleviate suffering in other beings, it is what it is. I'm not going to insult you or anything. However, if I notice you insulting others over moral trifles, I might privately think to myself that you are being hyperbolic. When I put my EA-type utilitarian hat on, almost all internet fighting seems to lack perspective.
I support your ability to express yourself. (I'm a little skeptical that's the main driver of the typical PS5 purchase, but that's beside the point.) I want you to thrive! I consume meat, so I can't condemn you for consuming meat. I did try going vegan for a bit, but a vegan diet was causing fatigue. I now make a mild effort to eat a low-suffering diet. I also donate to https://gfi.org/ to support research into alternative meats. (I think it's plausible that the utilitarian impact of my diet+donations is net positive, since the invention of viable alternative meats could have such a large impact.) And whenever I get the chance, I rant about the state of vegan nutrition online, in the hope that vegans will notice my rants and improve things.
(Note that I'm not a member of the EA community, but I agree with aspects of the philosophy. My issues with the community can go in another thread.)
(I appreciate you writing this reply. Specifically, I find myself wondering if utilitarian advocacy would be more effective if what I just wrote, about the value of rejecting fear-style motivation, was made explicit from the beginning. It could make utilitarianism both more appealing and more sustainable.)
https://forum.effectivealtruism.org/posts/dbw2mgSGSAKB45fAk/...
Really I want to see vegans do a comprehensive investigation of every last nutrient that's disproportionately found in animal products, including random stuff like beta-alanine, creatine, choline, etc., and take a "better safe than sorry" approach of inventing a veggie burger that contains all that stuff in abundance, and is palatable.
I suspect you could make a lot of money by inventing such a burger. Vegans are currently fixated on improving taste, and they seem to have a bit of a blind spot around nutrition. I expect a veggie burger which mysteriously makes vegans feel good, gives them energy, and makes them feel like they should eat more of it will tend to sell well.
I often eat supplemented processed food (~1 serving/day), primarily to turn lazy meals (fries, pizza, sandwiches…) into lazy-but-healthy meals. When I stopped eating meat, these became even more useful.
I recommend the bars (no preparation) and the "pots" (savory meals). The shakes are good and cheap. It all seems nutritionally balanced and complete.
http://jimmyjoy.com/ Not affiliated in any way, just a happy customer.
* No weird yeast or fungal ingredients -- living or dead. I get fatigue after consuming that stuff. My body doesn't respond well to it. I suppose the most ordinary sort of mushrooms are probably OK.
* No eggs, they're pretty suffering-intensive: https://sandhoefner.github.io/animal_suffering_calculator
* Minimally processed ingredients preferred.
* Infuse with iodized salt, healthy fats, and spices for flavor. Overall good taste is much more important than replicating meat flavor/mouthfeel. I assume if you iterate on the spices enough, you can come up with something that tastes incredible.
I'm imagining a heavily fortified, compressed bean patty of some kind. I suppose with beans you'd have to account for phytates interfering with nutrient absorption, though.
There's an article here which discusses and embeds the video: https://barbend.com/clarence-kennedy-vegan-diet/
Uhh, that's totally unintuitive and surely almost all people would disagree, right?
If not in words, people disagree in actions. Even within effective altruism there are a lot of people only giving to human centred causes.
The page gives 3% to shrimp because their lifespan is 3% that of humans. It’s a terrible avenue for this estimate. By the same estimate, giant tortoises are less ethical to kill than humans. The heavens alone can judge you for the war crimes you’d be committing by killing a Turritopsis dohrnii.
Number of neurons is the least-bad objective measurement in my eyes. Arthropods famously have very few neurons, <100k compared to 86b in humans. That’s a 1:1000000 neuron ratio, which feels like a more appropriate ratio for suffering than a lifespan-based ratio, though both are terrible.
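For what it's worth, the arithmetic from those two figures:

```python
# Neuron-count ratio from the figures above (86 billion vs. <100k).
human_neurons = 86_000_000_000
arthropod_neurons = 100_000          # the upper bound used above

ratio = human_neurons / arthropod_neurons
print(f"1:{ratio:,.0f}")             # 1:860,000, i.e. roughly the 1:1,000,000 quoted
```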
> Capacity for welfare = welfare range × lifespan. An individual’s welfare range is the difference between the best and worst welfare states the individual can realize.
> we rely on indirect measures even in humans: behavior, physiological changes, and verbal reports. We can observe behavior and physiological changes in nonhumans, but most of them aren’t verbal. So, we have to rely on other indirect proxies, piecing together an understanding from animals’ cognitive and affective traits or capabilities.
First time I see this "warfare range" notion and it seems quite clever to me.
Also the original article says 3.1% is the median while the mean is 19%. I guess that may be caused by individuals havûg différents experiences each other’s.
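A toy instance of that formula, just to show how the lifespan term enters (the welfare-range values here are invented placeholders, not Rethink Priorities' estimates):

```python
# Toy use of "capacity for welfare = welfare range x lifespan".
# The welfare-range numbers are placeholders; only the structure matters here.
human_welfare_range = 1.0
human_lifespan_years = 80.0

shrimp_welfare_range = 0.2                 # invented placeholder
shrimp_lifespan_years = 80.0 * 0.03        # ~3% of a human lifespan

human_capacity = human_welfare_range * human_lifespan_years
shrimp_capacity = shrimp_welfare_range * shrimp_lifespan_years
print(round(shrimp_capacity / human_capacity, 3))  # 0.006: the lifespan term drags the ratio below the welfare range alone
```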
But this blog post uses a little BS math (0.3 seconds IS shorter than 20 minutes! By several orders of magnitude! Take my money!)
and some hand wavey citations (Did you know shrimp MIGHT be conscious based on a very loose definition of consciousness? Now you too are very smart! You can talk about this with your sort-of friends (coworkers) from the job where you spend 80 hours a week now!)
to convince some people that this is indeed an important and worthy thing. Because people who can be talked into this don't really interact with the real world, for the most part. So they don't know that lots of actual people need actual help that doesn't involve them dying anyway and being eaten en-masse afterwards.
Please read the cited Rethink Priorities research: https://rethinkpriorities.org/research-area/welfare-range-es...
Notably the FAQ and responses.
> The foolishness of that comment is so deep, I can only ascribe it to higher education. You have to have gone to college to say something that stupid.
The entire effort to quantify morality rests on the shakiest of foundations but makes confident claims about its own validity based on layers and layers of mathematical obfuscation and abstraction.
I agree that lots of people in the real world need help. Helping them is good. But so is averting enormous amounts of pain and suffering. In expectation, even given a low credence in shrimp sentience, giving averts huge amounts of pain and suffering, which is quite good.
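A sketch of the expected-value reasoning, where every number is an illustrative assumption rather than a figure from the article:

```python
# Expected-value sketch of "low credence, huge numbers"; all inputs are illustrative.
p_sentience = 0.1                 # low credence that shrimp can suffer at all
shrimp_helped_per_dollar = 1_000  # invented scale for how many shrimp a dollar reaches
intensity_vs_human = 0.03         # suffering weight relative to a human

expected_value = p_sentience * shrimp_helped_per_dollar * intensity_vs_human
print(round(expected_value, 2))   # 3.0 "human-equivalent" units of suffering averted per dollar
```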
I guess what strikes me the most odd is that not eating shrimp is never suggested as an alternative. It starts from the premise that, well, we're going to eat shrimp anyway, so the least we could do is give them a painless death first. If you follow this logic to its extremes, you get things like, "well, it's expensive to actually feed these starving children, but for just pennies a day you can make sure they at least die painlessly".
If you're considering how to best spend your money, it doesn't matter that not eating shrimp would be an even better solution than preventing pain when they are killed. It only matters what the most effective way of spending your money is.
Should you push arguments that seem ridiculously unacceptable to the vast majority of people, thereby reducing the weight of more acceptable arguments you could possibly make?
I know the person making this argument isn't necessarily aligned with deontology. Maybe that was your original point.
I think this is a tough call in general. Current morality would be considered "ridiculously unacceptable" by 1800s standards, but I see it as a good thing that we've moved away from 1800s morality. I'm glad people were willing to challenge the 1800s status quo. At the same time, my sense is that the environmentalists who are ruining art in museums are probably challenging the status quo in a way that's unproductive.
To some degree, I suspect the rationalist / EA crowd has decided that weird contrarians tend to be the people who have the greatest impact in the long run, so it's OK to filter for those people.
The reason nobody actually does this, is that EA is a belief system adopted by (at least) comfortably well-off Silicon Valley people to make themselves feel better about their effect on society. If there is a 0.00001% chance they can prevent AI MegaHitler, everything they do to make more money is justified.
Neither of these points are well supported by the article. Nor are they well supported by the copious links scattered through the blog post.
For example, "they worked with Tesco to get an extra 1.6 billion shrimp stunned before slaughter every year" links to a summary about the charity NOT to any source for 1.6 billion shrimp saved.
It's in the exact webpage linked there. You just didn't scroll down enough.
> Tesco and Sainsbury’s published shrimp welfare commitments, citing collaboration with SWP (among others), and signed 9 further memoranda of understanding with producers, in total committing to stunning a further ~1.6B shrimps per annum.
https://animalcharityevaluators.org/charity-review/shrimp-we...
https://www.globenewswire.com/en/news-release/2024/08/17/293...
It's not a primary source. It's a one sentence summary of a secondary source. This[1] is the primary source of the Tesco commitment.
[1] https://www.tescoplc.com/sustainability/documents/policies/t...
> Tesco and Sainsbury’s published shrimp welfare commitments, citing collaboration with SWP (among others), and signed 9 further memoranda of understanding with producers, in total committing to stunning a further ~1.6B shrimps per annum.
It is a secondary source. It does not present firsthand information. It describes commitments made by Tesco, Sainsbury's, and others.
Setting this aside, the point I made is simple. This article argues for a radical change in morality; folks generally view a human life as worth much much more than 32 shrimp lives.
A well-written radical argument should understand how it differs from mainstream thought and focus on the premise(s) that underlies this difference. As I am unimpressed with the premises, I find the article unconvincing.
Indeed, people will resist being "tricked" into this framework: debating on these terms will feel like having their morals twisted into justifying things they don't believe in. And although they may not have the patience or rhetorical skill to put into words exactly why they resist it, their intuitions won't lead them astray, and they'll react according to their true-but-hard-to-verbalize beliefs (usually by gradually getting frustrated and angry with you).
A person who believes in rationalizing everything will then think that someone who resists this kind of argument is just too dumb, or irrational, or stubborn, or actually-evil, to see that they are wrong. But it seems to me that the very idea that you can rationalize morality, that you can compute the right thing to do at a personal-ethics level, is itself a moral belief, which those people simply do not agree with, and their resistance is in accordance with that: you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic. No wonder they resist! People do not release control over their moral beliefs lightly. Rather I think it's the people who are very insecure in their own beliefs who are susceptible to giving them up to someone who runs rhetorical circles around them.
I've come to think that a lot of 21st century discord (c.f. American political polarization) is due to this basic conflict. People who believe in rationalizing everything think they can't be wrong because the only way to evaluate anything is rationally--a lens through which, of course rationality looks better than anything else. Meanwhile everyone who trusts in their own moral intuitions feels tricked and betrayed and exploited and sold out when it happens. Sure, they can't always find the words to defend themselves. But it's the rationalizers who are in the wrong: pressuring someone into changing their mind is not okay; it's a basic act of disrespect. Getting someone on your side for real means appealing to their moral intuition, not making them doubt theirs until they give up and reluctantly agree with yours. Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance. At that point they may well be "wrong", but there's no convincing them otherwise: their moral goal has been replaced with a singular need to get to make their own decisions instead of being subjugated by yours.
Anyway.
IMO to justify animal welfare utilitarianism to people who don't care about it at all, you need to take one of two stances:
1. We (the animal-empathizers) live in a society with you, and we care a lot about this, but you don't. But we're in community with each other, so we ought to support each other's causes even if they're not personally relevant to us. So how about we support what you care about and you support what we care about, so everyone benefits? In this case it's very cheap to help.
2. We all live in a society together which should, by now, have largely solved for our basic needs (except for our basic incompetence at it, which, yeah, we need to keep working on). The basic job of morality is to guarantee the safety of everyone in our community. As we start checking off basic needs at the local scale we naturally start expanding our definition of "community" to more and more beings that we can empathize with: other nations and peoples, the natural world around us, people in the far future who suffer from our carelessness, pets, and then, yes, animals that we use for food. Even though we're still working on the "nearby" hard stuff, like protecting our local ecosystems, we can also start with the low-hanging fruit on the far-away stuff, including alleviating the needless suffering of shrimp. Long-term we hope to live in harmony with everything on earth in a way that has us all looking out for each other, and this is a small step towards that.
"(suffering per death) * (discount rate for shrimp being 3% of a human) * (dollar to alleviate) = best charity" just doesn't work at all. I notice that the natural human moral intuition (the non-rational version) is necessarily local: it's focused on protecting whatever you regard as your community. So to get someone to extend it to far-away less-sentient creatures, you have to convince the person to change their definition of the "community"--and I think that's what happens naturally when they feel like their local community is safe enough that they can start extending protection at a wider radius.
To me, the point of this argument (along with similar ones) is to expose these deeper asymmetries that exist in most people's moral systems - to make people question their moral beliefs instead of accepting their instinct. Not to say "You're all wrong, terrible people for not donating your money to this shrimp charity which I have calculated to be a moral imperative".
Yes, every genius 20-year-old wants to break down other people's moral beliefs, because it's the most validating feeling in the world to change someone's mind. From the other side, this looks like, quoting OP:
> you'd be trying to convince them to replace their moral beliefs with yours in order to win an argument by tricking them with logic.
And feels like:
> pressuring someone into changing their mind is not okay; it's a basic act of disrespect.
And doesn't work, instead:
> Anyway it's a temporary and false victory: theirs will re-emerge years later, twisted and deformed from years of imprisonment, and often set on vengeance.
I may be putting my hands up in surrender, as a 20 year old (decidedly not genius though). But I'm instead defending this belief, not trying to convince others. Also, I don't think it's the worst thing in the world to have people question their preconceived moral notions. I've taken ethics classes in college and I personally loved having them challenged.
Were morality a logical system, then yes, finding apparent contradictions would seem to invalidate it. But somehow that's backwards. At some level moral intuitions can't be wrong: they're moral intuitions, not logic. They obey different rules; they operate at the level of emotion, safety, and power. A person basically cannot be convinced with logic to no longer care about the safety of someone/something that they care about the safety of. Even if they submit to an argument of that form, they're doing it because they're conceding power to the arguer, not because they've changed their mind (although they may actually say that they changed their opinion as part of their concession).
This isn't cut-and-dry; I think I have seen people genuinely change their moral stances on something from a logical argument. But I suspect that it's incredibly rare, and when it happens it feels genuinely surprising and bizarre. Most of the time when it seems like it's happening, there's actually something else going on. A common one is a person changing their professed moral stance because they realize they win some social cachet for doing so. But that's a switch at the level of power, not morality.
Anyway it's easy to claim to hold a moral stance when it takes very little investment to do so. To identify a person's actual moral opinions you have to see how they act when pressure is put on them (for instance, do they resist someone trying to change their mind on an issue like the one in the OP?). People are incredibly good at extrapolating from a moral claim to its moral implications that affect them (if you claim that we should prioritize saving the lives of shrimp, what else does that argument justify? And what things that I care about does that argument then invalidate? Can I still justify spending money on the things I care about in a world where I'm supposed to spend it on saving animals?), and they will treat an argument as a threat if it seems to imply things that would upset their personal morality.
The sorts of arguments that do regularly change a person's opinion on the level of moral intuitions are of the form:
* information that you didn't notice how you were hurting/failing to help someone
* or, information that you thought you were helping or avoiding hurting someone, but you were wrong.
* corrective actions like shame from someone they respect or depend on ("you hurt this person and you're wrong to not care")
* other one-on-one emotional actions, like a person genuinely apologizing, or acting selfless towards you, or asserting a boundary
(Granted, this stance seems to invalidate the entire subject of ethics. And it kinda does: what I'm describing is phenomenological, not ethical; I'm claiming that this is how people actually work, even if you would like them to follow ethics. It seems like ethics is what you get when you try to extend ground-level moralities to an institutional level: when you abstract morality from individuals to collectives, you have to distill it into actual rules that obey some internal logic, and that's where ethics comes in.)
Trees and bushes and vegetables might experience extreme agony too when dying.
https://rethinkpriorities.org/research-area/welfare-range-es...
Also, why not endeavor to replace meat grown by slaughtering animals with other alternatives? The optimization of such would reduce the energy, costs, biothreats, and suffering that eating other living beings creates.
Utilitarians tend to be very interested in this, too. I've been giving to this group: https://gfi.org/
* they triumphally declare victory--ethics is solved! We can finally Do The Most Good!
* or, it's so ridiculous that it occurs to them that they're missing something--must have taken a wrong turn somewhere earlier on.
By my tone you can probably tell I take the latter position, roughly because "suffering", or "moral value", is not rightly seen as measurable, calculable, or commensurable, even between humans. It's occasionally a useful view for institutions to hold, but IMO not one for a human.
If your goal is "maximize total happiness", then engineering blisshrimp is obviously the winning play. If your goal is "minimize total suffering", than the play is to engineer something that 1. experiences no suffering, 2. is delicious, and 3. outcompetes existing shrimp so we don't have to worry about their suffering anymore.
Ideally we'd engineer something that is in a state of perpetual bliss and wants to be eaten, not unlike the cows in Restaurant at the End of the Universe.
Eh, only if you're minimizing suffering per living being. Not total suffering. Having more happy creatures doesn't cancel out the sad ones. But I see what you mean.
According to this guy it does.
Kudos to you for making the connection.
Suffering is an expression/concept we humans have because we gave a certain state that name. Suffering is something an organism presents if it can't survive or struggles with survival.
Now, I'm a human, and my empathy is a lot stronger, a lot, for humans than for shrimp.
Btw, I do believe that if we really cared and made sure fellow humans did not need to suffer (they suffer because of capitalism), a lot of other suffering would stop too.
We would be able to actually think about shrimps and other animals.
How do we know this isn't just fiction?
> Shrimp are a test of our empathy. Shrimp don’t look normal, caring about them isn’t popular, but basic ethical principles entail that they matter.
I think we'll be looking back in the not-so-far future with disgust about how we treated animals.
[0] RTFA
Since they don’t have experience, they can’t suffer, in the morally relevant sense for this argument.
In particular, one portion features an autonomous bioreactor which produces enormous clouds of "yayflies"; mayflies whose nervous systems have been engineered to experience constant, maximal pleasure. The system's designer asserts that, given the sheer volume of yayflies produced, they have done more than anyone in history to increase the absolute quantity of happiness in the universe.
[1] https://forum.effectivealtruism.org/posts/EbQysXxofbSqkbAiT/...
>The way it works is simple and common sense
Claiming "common sense" in any argument is red flag number 1 that you don't actually have a self supporting argument. Common sense doesn't actually exist, and anyone leaning on it is just trying to compel you through embarrassment to support their cause without argument. There's a reason proving 1+1=2 takes a hundred pages.
Randomly inserting numbers that "seem" right so that you can pretend to be a rigorous field is cargo cultism and pseudoscience. Numbers without data and justification is not rigor.