“Bernard Williams… concluded a lengthy attack on utilitarianism by remarking: ‘The day cannot be too far off in which we hear no more of it.’ It is now more than forty years since Williams made that comment, but we continue to hear plenty about utilitarianism.”
- Katarzyna de Lazari-Radek & Peter Singer1
Utilitarianism is a very controversial moral theory. Critics have raised many objections against it, and its defenders have responded with attempts to defuse these objections.
While our presentation focuses on utilitarianism, it is worth noting that many of the objections below could also be taken to challenge other forms of consequentialism (just as many of the arguments for utilitarianism also apply to these related views). This chapter explores objections to utilitarianism and closely related views in contrast to non-consequentialist approaches to ethics.
General Ways of Responding to Objections to Utilitarianism
Many objections rest on the idea that utilitarianism has counterintuitive implications. We can see these implications by considering concrete examples or thought experiments. For instance, in our article on the rights objection, we consider the Transplant case:
Transplant: Imagine that there are five patients, each of whom will soon die unless they receive an appropriate transplanted organ—a heart, two kidneys, a liver, and lungs. A healthy patient, Chuck, comes into the hospital for a routine check-up and the doctor finds that Chuck is a perfect match as a donor for all five patients. Should the doctor kill Chuck and use his organs to save the five others?
At first glance, it seems that utilitarianism has to answer the question affirmatively. It is better that five people survive than that just one person does. But killing Chuck seems morally monstrous to many. This apparent implication of utilitarianism is taken as an argument against its being the correct moral theory.
Proponents of utilitarianism can respond to its apparent counterintuitive implications in four general ways.
First, they can accommodate the intuition that seems to conflict with utilitarianism by arguing that a sophisticated application of utilitarian principles avoids the counterintuitive implication. To more reliably promote good outcomes, sophisticated utilitarians recognize their cognitive limitations and act in accordance with commonsense norms and heuristics, other than in exceptional circumstances. Insofar as an objector merely claims that we should embrace or oppose certain norms in practice, utilitarians can often straightforwardly agree.
Second, utilitarians can attempt to debunk the moral intuition invoked by a particular case by suggesting that it resulted from an unreliable process.2 If a debunking argument succeeds, the targeted moral intuition should not be given much weight in our moral reasoning.
Third, proponents of utilitarianism can attack the available alternatives—such as deontological or virtue ethical theories—to show that they have implications no less counterintuitive than those of utilitarianism.
A fourth strategy is to tolerate the intuition, which is sometimes called “biting the bullet”. This is to accept that utilitarianism has counterintuitive implications but to hold on to the theory because, all things considered, it is still more plausible than its rivals. The costs of accepting a counterintuitive implication, it is argued, can be outweighed by the force of the arguments in favor of utilitarianism. Moreover, our intuitions are often inconsistent and subject to change over time, which makes it impossible to find consistent and plausible principles that reflect all of them. So it requires good judgment to determine which intuitions and theoretical commitments are non-negotiable, and which we should be willing to compromise on in pursuit of “reflective equilibrium”, or the most plausible and coherent overall combination of moral verdicts and principles.
The Utilitarian’s Toolkit
There are further ideas that utilitarians may appeal to in developing the above general strategies.
Keep hypotheticals at a distance.3 The distinction between utilitarianism’s criterion of rightness and its recommended decision procedure is crucial to utilitarian attempts to accommodate common intuitions. Given that “rightness” is not the central concept of utilitarian theory, it may well make more sense to interpret common intuitions about “right” and “wrong” as addressing the question of what norms we (as fallible agents) should endorse in practice, rather than what ideally ought to be done (by an omniscient being) in principle. If justified, this interpretive move can drastically reduce the apparent conflict between utilitarianism and commonsense moral intuitions.
Accommodate nearby intuitions. More generally, utilitarians may seek to reduce their apparent conflict with commonsense by identifying nearby intuitions that they can accommodate. For example, if critics claim that a specific welfare-maximizing action is intuitively wrong, utilitarians may argue that our intuition here is better thought of as tracking one of the following features:
- that it would be good to inculcate practical norms against actions of that type;
- that a person willing to perform such an action would likely have bad character, and be likely to cause greater harms on other occasions;
- that the action is reckless, or plausibly wrong in expectation, even if it happens to turn out for the best.4
Gobble up competing values. Critics sometimes allege that utilitarians don’t value obviously good things like rights, freedom, virtue, equality, and the natural environment. But while these things may be obviously good, it is less obvious that they are all non-instrumentally good. And utilitarians can certainly value them instrumentally. Moreover, utilitarians who accept an objective list theory of well-being may even be able to give non-instrumental consideration to goods (like freedom and beauty) that could plausibly be counted as welfare goods when part of a person’s life.
Stuff people into suitcases. Rival moral theories may be undermined by appeal to the veil of ignorance, and the related idea of ex ante Pareto—or what it would be in everyone’s best interests to agree to in advance (before learning about their particular position in life). Our intuitive reluctance to stick with the overall best policy can then start to seem biased. To make the point vivid, when faced with difficult trade-offs between conflicting interests, just imagine putting each affected person in a separate suitcase, and shuffling their positions.5 All would then rationally endorse the utilitarian-recommended action.
The Pluralist’s Dilemma (between extremism and arbitrariness). If you hold that there are non-utilitarian moral reasons (e.g. deontic constraints) that sometimes outweigh utilitarian reasons, this raises tricky questions about how the two kinds of reasons compare. If the non-utilitarian reason always trumps—no matter how great the cost to overall well-being—then this seems implausibly extreme. But the “moderate” pluralist alternative risks arbitrariness, due to lacking a clear account of where to draw the line, or precisely how much weight to give to non-utilitarian reasons relative to utilitarian ones.6
Bang the drums of war. We live in a morally unusual world. During high-stakes emergencies like fighting a just war, many activities that would otherwise seem above and beyond the call of duty, or even wrong, may instead be morally required—including risking your life, imposing burdens on your loved ones and leaving them for years, and killing enemy combatants. But in fact our “ordinary circumstances” involve horrific amounts of preventable suffering, with stakes as high as any war. Utilitarian verdicts may thus be bolstered by noting that much sentient life is (metaphorically) under siege, and that some moral heroism may accordingly be required to set things right.7
Make winning distinctions. Different versions of utilitarianism may be more or less vulnerable to different objections. For example, a version of the view that combines scalar, expectational, and hybrid elements may be better equipped to mitigate concerns about demandingness, cluelessness, and praiseworthy motivations. Objections to specifically hedonistic utilitarianism (such as the Experience Machine and Evil Pleasures objections) do not apply to utilitarians who accept a different theory of well-being.
Despite the silly labels, these are serious philosophical moves. We employ each, where appropriate, to respond to the specific objections listed below. (Students are encouraged, when reading an objection, to anticipate how to apply the utilitarian’s toolkit to address the objection at hand.)
Specific Objections to Utilitarianism
In separate articles, we discuss the following critiques of utilitarianism:
The Rights Objection
Many find it objectionable that utilitarianism seemingly licenses outrageous rights violations in certain hypothetical scenarios, killing innocent people for the greater good. This article explores how utilitarians might best respond.
The Mere Means Objection
Critics often allege that utilitarianism objectionably instrumentalizes people—treating us as mere means to the greater good, rather than properly valuing individuals as ends in themselves. In this article, we assess whether this is a fair objection.
The Separateness of Persons Objection
The idea that utilitarianism neglects the “separateness of persons” has proven to be a widely influential objection. But it is one that is difficult to pin down. This article explores three candidate interpretations of the objection, and how utilitarians can respond to each.
The Demandingness Objection
In directing us to choose the impartially best outcome, even at significant cost to ourselves, utilitarianism can seem an incredibly demanding theory. This page explores whether this feature of utilitarianism is objectionable, and if so, how defenders of the view might best respond.
The Alienation Objection
Abstract moral theories threaten to alienate us from much that we hold dear. This article explores two possible defenses of utilitarianism against this charge. One recommends adopting motivations other than explicitly utilitarian ones. The second argues that suitably concrete concerns can be subsumed within broader utilitarian motivations.
The Special Obligations Objection
Relationships like parenthood or guardianship seemingly give rise to special obligations to protect those who fall under our care (where these obligations are more stringent than our general duties of beneficence towards strangers). This article explores the extent to which impartial utilitarianism can accommodate intuitions and normative practices of partiality.
The Equality Objection
Utilitarianism is concerned with the overall well-being of individuals in the population, but many object that justice requires an additional concern for how this well-being is distributed across individuals. This article examines this objection, and how utilitarians might best respond.
The Cluelessness Objection
Is utilitarianism undermined by our inability to predict the long-term consequences of our actions? This article explores whether utilitarians can still be guided by near-term expected value even when this is small in comparison to the potential value or disvalue of the unknown long-term consequences.
The Abusability Objection
Some argue that utilitarianism is self-effacing, or recommends against its own acceptance, due to the risk that mistaken appeals to the “greater good” may actually result in horrifically harmful actions being done. This article explores how best to guard against such risks, and questions whether it is an objection to a theory if it turns out to be self-effacing in this way.
Resources and Further Reading
- Katarzyna de Lazari-Radek & Peter Singer (2017). Utilitarianism: A Very Short Introduction. Oxford: Oxford University Press. Chapter 4: Objections.
- J. J. C. Smart & Bernard Williams (1973). Utilitarianism: For and Against. Cambridge: Cambridge University Press.
de Lazari-Radek, K. & Singer, P. (2017). Utilitarianism: A Very Short Introduction. Oxford: Oxford University Press. Preface. ↩︎
For a discussion of evolutionary debunking arguments, see Hanson, R. (2002). Why Health Is Not Special: Errors In Evolved Bioethics Intuitions. Social Philosophy & Policy. 19(2): 153–79. See also the discussion in our chapter on the Arguments for Utilitarianism. ↩︎
Public health experts recommend maintaining social distance of 6 feet or more from silly hypothetical cases at all times, lest they infect your understanding of what utilitarianism actually calls for in practice. If closer contact is required, protect yourself and others by first reading up on the utilitarian case for respecting commonsense norms, explained in Chapter 6. ↩︎
As further explained in our article on the rights objection, standard “counterexamples” to utilitarianism invite us to imagine that a typically-disastrous class of action (such as killing an innocent person) just so happens, in this special case, to produce the best outcome. But the agent in the imagined case generally has no good basis for discounting the typical risk of disaster. So it would be unacceptably risky for them to perform the typically-disastrous act. We maximize expected value by avoiding such risks. For all practical purposes, utilitarianism recommends that we should refrain from rights-violating behaviors. This constitutes a generalizable defense of utilitarianism against a wide range of alleged counterexamples. ↩︎
Hare, C. (2016). Should We Wish Well to All? Philosophical Review, 125(4): 451–472, pp. 454–455. See also the discussion in our chapter on the Arguments for Utilitarianism. ↩︎
By contrast, utilitarianism offers a clear and principled account of (e.g.) when constraints can reasonably be violated—namely, just when doing so would truly best serve overall well-being. Similarly for when it is worth damaging the natural environment, how to weigh small harms to many against grave harms to a few, and so on. That’s not to say that it will always be easy to tell what utilitarianism recommends in real-life situations, since it can be difficult to predict future outcomes. But it is at least clear in principle how different considerations weigh against each other, whereas other theories often do not offer even this much clarity. ↩︎
Of course, that’s not to suggest that the same particular actions are called for. Adopting a “war-like” stance outside of war might be expected to prove counterproductive. The point is just that the stakes are high enough that we shouldn’t necessarily expect truly moral advice, in our circumstances, to be comfortable. ↩︎