01 Consequentialism
Some claim a moral theory should not be classified as consequentialist unless it is agent-neutral, meaning that whether an outcome counts as good does not depend on the agent's point of view (every agent is given the same aims).
Consequentialism can technically encompass weird theories such as "maximize the number of goats in Texas".
It is criticized on the grounds that almost any moral theory can be represented as a version of consequentialism.
01.01 Quantitative vs Qualitative Hedonism
Some contemporaries of Bentham and Mill argued that hedonism lowers the value of human life to the level of animals, because it implies that, as Bentham said, an unsophisticated game (such as push-pin) is as good as highly intellectual poetry if the game creates as much pleasure (Bentham 1843). Quantitative hedonists sometimes respond that great poetry almost always creates more pleasure than trivial games (or sex and drugs and rock-and-roll), because the pleasures of poetry are more certain (or probable), durable (or lasting), fecund (likely to lead to other pleasures), pure (unlikely to lead to pains), and so on.
Mill used a different strategy to avoid calling push-pin as good as poetry. He distinguished higher and lower qualities of pleasures according to the preferences of people who have experienced both kinds (Mill 1861, 56; compare Plato 1993 and Hutcheson 1755, 421–23). This qualitative hedonism has been subjected to much criticism, including charges that it is incoherent and does not count as hedonism (Moore 1903, 80–81; cf. Feldman 1997, 106–24).
— SEP
01.02 Counter Arguments Against Hedonism (Experience Machine)
These points against hedonism are often supplemented with the story of the experience machine found in Nozick 1974 (42–45; cf. De Brigard 2010) and the movie, The Matrix. People on this machine believe they are spending time with their friends, winning Olympic gold medals and Nobel prizes, having sex with their favorite lovers, or doing whatever gives them the greatest balance of pleasure over pain. Although they have no real friends or lovers and actually accomplish nothing, people on the experience machine get just as much pleasure as if their beliefs were true. Moreover, they feel no (or little) pain. Assuming that the machine is reliable, it would seem irrational not to hook oneself up to this machine if pleasure and pain were all that mattered, as hedonists claim. Since it does not seem irrational to refuse to hook oneself up to this machine, hedonism seems inadequate. The reason is that hedonism overlooks the value of real friendship, knowledge, freedom, and achievements, all of which are lacking for deluded people on the experience machine.
— SEP
01.03 Preference Utilitarianism and the Experience Machine
A related position rests on the claim that what is good is desire satisfaction or the fulfillment of preferences; and what is bad is the frustration of desires or preferences. What is desired or preferred is usually not a sensation but is, rather, a state of affairs, such as having a friend or accomplishing a goal. If a person desires or prefers to have true friends and true accomplishments and not to be deluded, then hooking this person up to the experience machine need not maximize desire satisfaction. Utilitarians who adopt this theory of value can then claim that an agent morally ought to do an act if and only if that act maximizes desire satisfaction or preference fulfillment (that is, the degree to which the act achieves whatever is desired or preferred). What maximizes desire satisfaction or preference fulfillment need not maximize sensations of pleasure when what is desired or preferred is not a sensation of pleasure. This position is usually described as preference utilitarianism.
— SEP
An objection to preference utilitarianism: you might want to drink a liquid from a cup because you think it is beer, when it is really a strong acid. Preference utilitarians respond by restricting value to informed preferences for good things. (This can make substantive assumptions about which preferences are for goods, though.)
-
Ideal Utilitarianism = Utilitarianism with a Pluralistic Theory of Value. Ideal utilitarianism takes into account the values of beauty and truth/knowledge in addition to pleasure. (Possibly other values as well.)
Welfarist — values concern individual welfare.
Perfectionism (non-welfarist) — certain states make a person's life good without increasing the person's welfare. (Sometimes not considered utilitarian.)
Or maximize the fulfillment of certain moral rights.
Or include values of distribution/fairness.
-
Holistic Consequentialism/World Utilitarianism
This basically feels like sophisticated utilitarianism over naive utilitarianism: just "does this result in a better world?" by utilitarian standards. I might not interpret this right.
Opponents often charge that classical utilitarians cannot explain our obligations to keep promises and not to lie when no pain is caused or pleasure is lost. — SEP
-
Negative Utilitarianism = an act is wrong iff its consequences include more pain; only reducing pain matters, not promoting pleasure.
- Ex: A government could supply free contraceptives to prevent overcrowding that leads to hunger, disease, and pain. Classical utilitarianism might say not to provide them (more people could mean more total pleasure); negative utilitarians say to provide them.
- But it also implies that painlessly killing everyone would not be wrong, because it prevents future pain.
- Average utilitarianism is a kind of intermediate between classical and negative. (But it still has a problem: the average could be raised by killing the worst off.)
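The aggregation rules contrasted above can be sketched as toy functions. All the numbers and function names here are my own illustrative assumptions, not from the text:

```python
def classical(pop):
    # Classical (total) utilitarianism: sum of everyone's utility.
    return sum(pop)

def average(pop):
    # Average utilitarianism: mean utility per person (undefined for an empty world).
    return sum(pop) / len(pop)

def negative(pop):
    # Negative utilitarianism: only pain (negative utility) counts; less pain = better.
    return sum(u for u in pop if u < 0)

world = [5, 3, 1]          # three lives worth living, one less happy
without_worst = [5, 3]

print(classical(world), classical(without_worst))  # 9 8 -> total says keep everyone
print(average(world), average(without_worst))      # 3.0 4.0 -> average improves by removing the worst off

# Negative utilitarianism: an empty world contains zero pain, so "painlessly
# killing everyone" scores at least as well as any world that contains pain.
print(negative([5, -2]), negative([]))             # -2 0
```

The middle line is exactly the objection above: removing the worst-off person lowers the total but raises the average, so the average view (unlike the classical one) can favor eliminating the least happy.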
It has been noted that "good" is an adjective and must be used with qualification: "A is a good X", as in "A is a good poison". But for utilitarians you can just fill X in with "the world".
02 Separating normative ethical theory from decision procedures
Bentham wrote, "It is not to be expected that this process [his hedonic calculus] should be strictly pursued previously to every moral judgment." (1789, Chap. IV, Sec. VI) Mill agreed, "it is a misapprehension of the utilitarian mode of thought to conceive it as implying that people should fix their minds upon so wide a generality as the world, or society at large." (1861, Chap. II, Par. 19) Sidgwick added, "It is not necessary that the end which gives the criterion of rightness should always be the end at which we consciously aim." (1907, 413)
[…]
Furthermore, a utilitarian criterion of right implies that it would not be morally right to use the principle of utility as a decision procedure in cases where it would not maximize utility to try to calculate utilities before acting. Utilitarians regularly argue that most people in most circumstances ought not to try to calculate utilities, because they are too likely to make serious miscalculations that will lead them to perform actions that reduce utility. It is even possible to hold that most agents usually ought to follow their moral intuitions, because these intuitions evolved to lead us to perform acts that maximize utility, at least in likely circumstances (Hare 1981, 46–47). Some utilitarians (Sidgwick 1907, 489–90) suggest that a utilitarian decision procedure may be adopted as an esoteric morality by an elite group that is better at calculating utilities, but utilitarians can, instead, hold that nobody should use the principle of utility as a decision procedure.
[…]
Others object that this move takes the force out of consequentialism, because it leads agents to ignore consequentialism when they make real decisions. However, a criterion of the right can be useful at a higher level by helping us choose among available decision procedures and refine our decision procedures as circumstances change and we gain more experience and knowledge. Hence, most consequentialists do not mind giving up consequentialism as a direct decision procedure as long as consequences remain the criterion of rightness (but see Chappell 2001).
— SEP
03 Uncertainty + Utilitarianism → Moral skepticism
If overall utility is the criterion of moral rightness, then it might seem that nobody could know what is morally right. If so, classical utilitarianism leads to moral skepticism. However, utilitarians insist that we can have strong reasons to believe that certain acts reduce utility, even if we have not yet inspected or predicted every consequence of those acts. For example, in normal circumstances, if someone were to torture and kill his children, it is possible that this would maximize utility, but that is very unlikely. Maybe they would have grown up to be mass murderers, but it is at least as likely that they would grow up to cure serious diseases or do other great things, and it is much more likely that they would have led normally happy (or at least not destructive) lives. So observers as well as agents have adequate reasons to believe that such acts are morally wrong, according to act utilitarianism. In many other cases, it will still be hard to tell whether an act will maximize utility, but that shows only that there are severe limits to our knowledge of what is morally right. That should be neither surprising nor problematic for utilitarians.
— SEP
04 Objective vs Subjective Consequentialism
Actual vs Expected vs Foreseen vs Foreseeable Consequentialism/Utilitarianism — separating blaming a person from assessing the morality of an action.
If utilitarians want their theory to allow more moral knowledge, they can make a different kind of move by turning from actual consequences to expected or expectable consequences. Suppose that Alice finds a runaway teenager who asks for money to get home. Alice wants to help and reasonably believes that buying a bus ticket home for this runaway will help, so she buys a bus ticket and puts the runaway on the bus. Unfortunately, the bus is involved in a freak accident, and the runaway is killed. If actual consequences are what determine moral wrongness, then it was morally wrong for Alice to buy the bus ticket for this runaway. Opponents claim that this result is absurd enough to refute classic utilitarianism.
Some utilitarians bite the bullet and say that Alice's act was morally wrong, but it was blameless wrongdoing, because her motives were good, and she was not responsible, given that she could not have foreseen that her act would cause harm. Since this theory makes actual consequences determine moral rightness, it can be called actual consequentialism.
Other responses claim that moral rightness depends on foreseen, foreseeable, intended, or likely consequences, rather than actual ones. Imagine that Bob does not in fact foresee a bad consequence that would make his act wrong if he did foresee it, but that Bob could easily have foreseen this bad consequence if he had been paying attention. Maybe he does not notice the rot on the hamburger he feeds to his kids which makes them sick. If foreseen consequences are what matter, then Bob's act is not morally wrong. If foreseeable consequences are what matter, then Bob's act is morally wrong, because the bad consequences were foreseeable. Now consider Bob's wife, Carol, who notices that the meat is rotten but does not want to have to buy more, so she feeds it to her children anyway, hoping that it will not make them sick; but it does. Carol's act is morally wrong if foreseen or foreseeable consequences are what matter, but not if what matter are intended consequences, because she does not intend to make her children sick. Finally, consider Bob and Carol's son Don, who does not know enough about food to be able to know that eating rotten meat can make people sick. If Don feeds the rotten meat to his little sister, and it makes her sick, then the bad consequences are not intended, foreseen, or even foreseeable by Don, but those bad results are still objectively likely or probable, unlike the case of Alice. Some philosophers deny that probability can be fully objective, but at least the consequences here are foreseeable by others who are more informed than Don can be at the time. For Don to feed the rotten meat to his sister is, therefore, morally wrong if likely consequences are what matter, but not morally wrong if what matter are foreseen or foreseeable or intended consequences.
Consequentialist moral theories that focus on actual or objectively probable consequences are often described as objective consequentialism (Railton 1984). In contrast, consequentialist moral theories that focus on intended or foreseen consequences are usually described as subjective consequentialism. Consequentialist moral theories that focus on reasonably foreseeable consequences are then not subjective insofar as they do not depend on anything inside the actual subject's mind, but they are subjective insofar as they do depend on which consequences this particular subject would foresee if he or she were better informed or more rational.
— SEP
From this quote, I don't know what I'd say. I think actual consequentialism might be bull. I think that acquiring knowledge to make good decisions is part of the moral imperative. Deciding to acquire knowledge is itself an action, though, and I don't know how to balance this. I think the measure of an action should be Subjective Expected Utility. I feel like this topic comes down to "should we blame a person?", and the terms we use inform this. The consequence of calling it a "blameless bad action" hurts the person who committed the action, while just saying "it was an expected good action" doesn't hurt the person. The decision procedure does not change in either situation. I'm using utilitarianism to justify the words we use lol.
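A minimal sketch of the Subjective Expected Utility idea, applied to the Alice bus-ticket case. All probabilities and utility values are made-up assumptions for illustration, not anything from the SEP text:

```python
def expected_utility(outcomes):
    # outcomes: list of (probability, utility) pairs; probabilities should sum to 1.
    return sum(p * u for p, u in outcomes)

# Alice's choice as she could reasonably have seen it at the time:
buy_ticket = [(0.999, 10),      # runaway gets home safely
              (0.001, -1000)]   # freak fatal accident
do_nothing = [(1.0, -5)]        # runaway stays stranded

# By expected utility, buying the ticket was the right act even though the
# actual outcome was terrible -- the expected-consequence response to the case.
print(round(expected_utility(buy_ticket), 2))  # 8.99
print(expected_utility(do_nothing))            # -5.0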
05 Proximate Consequentialism
One final solution to these epistemological problems deploys the legal notion of proximate cause. If consequentialists define consequences in terms of what is caused (unlike Sosa 1993), then which future events count as consequences is affected by which notion of causation is used to define consequences. Suppose I give a set of steak knives to a friend. Unforeseeably, when she opens my present, the decorative pattern on the knives somehow reminds her of something horrible that her husband did. This memory makes her so angry that she voluntarily stabs and kills him with one of the knives. She would not have killed her husband if I had given her spoons instead of knives. Did my decision or my act of giving her knives cause her husband's death? Most people (and the law) would say that the cause was her act, not mine. Why? One explanation is that her voluntary act intervened in the causal chain between my act and her husband's death. Moreover, even if she did not voluntarily kill him, but instead she slipped and fell on the knives, thereby killing herself, my gift would still not be a cause of her death, because the coincidence of her falling intervened between my act and her death. The point is that, when voluntary acts and coincidences intervene in certain causal chains, then the results are not seen as caused by the acts further back in the chain of necessary conditions (Hart and Honoré 1985). Now, if we assume that an act must be such a proximate cause of a harm in order for that harm to be a consequence of that act, then consequentialists can claim that the moral rightness of that act is determined only by such proximate consequences. This position, which might be called proximate consequentialism, makes it much easier for agents and observers to justify moral judgments of acts because it obviates the need to predict non-proximate consequences in distant times and places.
Hence, this move is worth considering, even though it has never been developed as far as I know and deviates far from traditional consequentialism, which counts not only proximate consequences but all upshots — that is, everything for which the act is a causally necessary condition.
— SEP
My intuition is that proximate cause does not matter. If you give an obviously bad person a cooking knife set as a gift, knowing they might well stab another person, I think this is a bad action.
06 Utilitarianism Overlooking Justice and Rights
Another problem for utilitarianism is that it seems to overlook justice and rights. One common illustration is called Transplant. Imagine that each of five patients in a hospital will die without an organ transplant. The patient in Room 1 needs a heart, the patient in Room 2 needs a liver, the patient in Room 3 needs a kidney, and so on. The person in Room 6 is in the hospital for routine tests. Luckily (for them, not for him!), his tissue is compatible with the other five patients, and a specialist is available to transplant his organs into the other five. This operation would save all five of their lives, while killing the "donor". There is no other way to save any of the other five patients (Foot 1966, Thomson 1976; compare related cases in Carritt 1947 and McCloskey 1965).
We need to add that the organ recipients will emerge healthy, the source of the organs will remain secret, the doctor won't be caught or punished for cutting up the "donor", and the doctor knows all of this to a high degree of probability (despite the fact that many others will help in the operation). Still, with the right details filled in (no matter how unrealistic), it looks as if cutting up the "donor" will maximize utility, since five lives have more utility than one life (assuming that the five lives do not contribute too much to overpopulation). If so, then classical utilitarianism implies that it would not be morally wrong for the doctor to perform the transplant and even that it would be morally wrong for the doctor not to perform the transplant. Most people find this result abominable. They take this example to show how bad it can be when utilitarians overlook individual rights, such as the unwilling donor's right to life.
— SEP
The transplant might lead to very bad societal effects, thus bad. But no, I don't care about the line of taking a life. Many will modify utilitarianism to fix these issues, for example by saying a killing is worse than a death. But if the five people needing organs were stabbed by a murderer, so that letting them die means five killings, then that fix falls apart.
07 Agent-relative vs agent-neutral Consequentialism
(Again, I think this is a blame game because it doesn't change the decision procedure)
This kind of case leads some consequentialists to introduce agent-relativity into their theory of value (Sen 1982, Broome 1991, Portmore 2001, 2003, 2011). To apply a consequentialist moral theory, we need to compare the world with the transplant to the world without the transplant. If this comparative evaluation must be agent-neutral, then, if an observer judges that the world with the transplant is better, the agent must make the same judgment, or else one of them is mistaken. However, if such evaluations can be agent-relative, then it could be legitimate for an observer to judge that the world with the transplant is better (since it contains fewer killings by anyone), while it is also legitimate for the doctor as agent to judge that the world with the transplant is worse (because it includes a killing by him). In other cases, such as competitions, it might maximize the good from an agent's perspective to do an act, while maximizing the good from an observer's perspective to stop the agent from doing that very act. If such agent-relative value makes sense, then it can be built into consequentialism to produce the claim that an act is morally wrong if and only if the act's consequences include less overall value from the perspective of the agent. This agent-relative consequentialism, plus the claim that the world with the transplant is worse from the perspective of the doctor, could justify the doctor's judgment that it would be morally wrong for him to perform the transplant. A key move here is to adopt the agent's perspective in judging the agent's act. Agent-neutral consequentialists judge all acts from the observer's perspective, so they would judge the doctor's act to be wrong, since the world with the transplant is better from an observer's perspective. In contrast, an agent-relative approach requires observers to adopt the doctor's perspective in judging whether it would be morally wrong for the doctor to perform the transplant.
This kind of agent-relative consequentialism is then supposed to capture commonsense moral intuitions in such cases (Portmore 2011).
— SEP
It feels like this is just saying "an action is good/bad iff the agent considers the action good/bad". Imo you should blame the person for the action if it is worth it, otherwise don't. I'm agent-neutral, I guess, if I understand this properly. Agent-relative just sounds like lazy consequentialism. AHHH, I probably don't understand; take another look at this.
A more interesting thought experiment, though I don't know how it relates to agent-relative/neutral:
Imagine that the doctor herself wounded the five people who need organs. If the doctor does not save their lives, then she will have killed them herself. In this case, even if the doctor can disvalue killings by herself more than killings by other people, the world still seems better from her own perspective if she performs the transplant. Critics will object that it is, nonetheless, morally wrong for the doctor to perform the transplant. — SEP
Indirect consequentialism
- Rule consequentialism
- Obedience rule consequentialism — what if everyone obeyed a rule, or everyone violated it? (But what about a rule like "have some children"? Humanity dies out if everyone decides not to have children, yet clearly not everyone has to do it.)
- Acceptance rule consequentialism — what if everyone were PERMITTED to do that?
- Public acceptance rule consequentialism — an act is wrong iff it violates a publicly known rule whose acceptance maximizes the good. (Ex: if people broadly knew doctors were permitted to harvest organs, they might not go to hospitals.)
08 Criticisms of rule utilitarianism
Why should mistakes by other doctors in other cases make this doctor's act morally wrong, when this doctor knows for sure that he is not mistaken in this case? Rule consequentialists can respond that we should not claim special rights or permissions that we are not willing to grant to every other person, and that it is arrogant to think we are less prone to mistakes than other people are. However, this doctor can reply that he is willing to give everyone the right to violate the usual rules in the rare cases when they do know for sure that violating those rules really maximizes utility. Anyway, even if rule utilitarianism accords with some common substantive moral intuitions, it still seems counterintuitive in other ways. This makes it worthwhile to consider how direct consequentialists can bring their views in line with common moral intuitions, and whether they need to do so.
— SEP
09 Utilitarianism is demanding
Another popular charge is that classic utilitarianism demands too much, because it requires us to do acts that are or should be moral options (neither obligatory nor forbidden) (Scheffler 1982). For example, imagine that my old shoes are serviceable but dirty, so I want a new pair of shoes that costs $100. I could instead give the $100 to a charity that will use my money to save someone else's life. It would seem to maximize utility for me to give the $100 to the charity. If it is morally wrong to do anything other than what maximizes utility, then it is morally wrong for me to buy the shoes. But buying the shoes does not seem morally wrong. It might be morally better to give the money to charity, but such contributions seem supererogatory, that is, above and beyond the call of duty. Of course, there are many more cases like this. When I watch television, I always (or almost always) could do more good by helping others, but it does not seem morally wrong to watch television. When I choose to teach philosophy rather than working for CARE or the Peace Corps, my choice probably fails to maximize utility overall. If we were required to maximize utility, then we would have to make very different choices in many areas of our lives. The requirement to maximize utility, thus, strikes many people as too demanding because it interferes with the personal decisions that most of us feel should be left up to the individual.
— SEP
To fix utilitarian demandingness, you can redefine an act to be morally wrong only when it fails to maximize utility AND its agent should be punished for the failure (and punishing is not always utility-maximizing) (Mill). (I don't know; it's possible.) This generally says it is "not necessarily morally wrong to fail to maximize utility; doing so is an 'ought', not a 'must'". Alternatively, you can apply proximate cause and say you don't have to donate because you are not responsible for the harm. (Offering aid to a drowning person is not always legally required.)
- Satisficing consequentialism — instead of maximizing utility, set a bar for the amount of utility we ought to create. Possibly based?
- Progressive consequentialism — we morally ought to improve the world, i.e. make it better than it would be if we did nothing (leave it better than you found it). Possibly valid?
- Scalar consequentialism — rates how good an act is on a scale from wrongness to rightness. (Ex: donating $1000 is more right than donating $100.) Based, but I feel like utilitarians do this anyways.
- Contrastivist consequentialism — don't fully understand, but I don't think it is terribly important.
"It remains controversial, however, whether any form of consequentialism can adequately incorporate common moral intuitions about friendship."
10 Consequentialism may be supported as an inference to the best explanation of our moral intuitions
Consequentialism also might be supported by an inference to the best explanation of our moral intuitions. This argument might surprise those who think of consequentialism as counterintuitive, but in fact consequentialists can explain many moral intuitions that trouble deontological theories. Moderate deontologists, for example, often judge that it is morally wrong to kill one person to save five but not morally wrong to kill one person to save a million. They never specify the line between what is morally wrong and what is not morally wrong, and it is hard to imagine any non-arbitrary way for deontologists to justify a cutoff point. In contrast, consequentialists can simply say that the line belongs wherever the benefits most outweigh the costs, including any bad side effects (cf. Sinnott-Armstrong 2007). Similarly, when two promises conflict, it often seems clear which one we should keep, and that intuition can often be explained by the amount of harm that would be caused by breaking each promise. In contrast, deontologists are hard pressed to explain which promise is overriding if the reason to keep each promise is simply that it was made (Sinnott-Armstrong 2009). If consequentialists can better explain more common moral intuitions, then consequentialism might have more explanatory coherence overall, despite being counterintuitive in some cases.
— SEP
11 Utilitarianism might be supported by deductive arguments
Consequentialism also might be supported by deductive arguments from abstract moral intuitions. Sidgwick (1907, Book III, Chap. XIII) seemed to think that the principle of utility follows from certain very general self-evident principles, including universalizability (if an act ought to be done, then every other act that resembles it in all relevant respects also ought to be done), rationality (one ought to aim at the good generally rather than at any particular part of the good), and equality ("the good of any one individual is of no more importance, from the point of view … of the Universe, than the good of any other").
Other consequentialists are more skeptical about moral intuitions, so they seek foundations outside morality, either in non-normative facts or in non-moral norms. Mill (1861) is infamous for his "proof" of the principle of utility from empirical observations about what we desire (cf. Sayre-McCord 2001). In contrast, Hare (1963, 1981) tries to derive his version of utilitarianism from substantively neutral accounts of morality, of moral language, and of rationality (cf. Sinnott-Armstrong 2001). Similarly, Gewirth (1978) tries to derive his variant of consequentialism from metaphysical truths about actions.
— SEP