The Alienation Objection
Most of us have a wide range of concerns and motivations. We care about our friends and family. We strive for success in our studies, careers, and personal projects. We become absorbed in hobbies, in social or political causes, or in cheering on the local sports team. And perhaps we have some general desire that the world as a whole become a better place. All of these things matter to us.
But suppose utilitarianism insisted that, of all your many motivations, only this last desire—for an impartially better world—is morally legitimate. Everything else you care about is an unhelpful distraction. On this picture, the vast majority of ordinary human concerns are dismissed as selfish impulses to be suppressed or otherwise “managed” in whatever way would best serve the one morally legitimate goal of promoting the impartial good.
Such a view would seem deeply alienating. Imagine trying to live your life in accordance with such a theory, forcing your unruly desires into conformity with its narrow conception of moral legitimacy. You visit your friend in hospital, and when he expresses gratitude for your apparent concern, you coldly reply, “I’m not visiting out of personal concern. I simply calculated that I could do more good here trying to cheer you up, since the soup kitchen is already fully staffed today”.1
Such impersonal motivations threaten to starve our interpersonal relations of the human warmth that they need to flourish. Critics allege that, by subordinating all other motivations to a single, overwhelming desire for the “greater good”, impersonal theories like utilitarianism demand “one thought too many”2 of agents—alienating us from our loved ones, our personal projects, and any other goods that ordinarily seem to warrant direct concern.
We will consider two ways utilitarians might respond to this alienation objection. The sophisticated utilitarian strategy recommends adopting (or at least tolerating) motivations other than explicitly utilitarian ones. The subsumption strategy instead argues that direct concern for particular goods or individuals can be subsumed within straightforwardly utilitarian motivations.
Sophisticated Utilitarianism
Peter Railton introduced the sophisticated consequentialist as someone who is committed to an objectively consequentialist life, but is not especially concerned about thinking like a consequentialist.3 Instead, they may have many interests and personal concerns, including care for their loved ones, which they maintain so long as doing so is overall for the best. As Railton explains, “while [the sophisticated consequentialist] ordinarily does not do what he does simply for the sake of doing what’s right, he would seek to lead a different sort of life if he did not think his were morally defensible.”4
It’s empirically hard to know which motivations would actually maximize well-being in any particular circumstances. It’s unlikely that all our default dispositions would pass muster from the perspective of global utilitarian evaluation, so we cannot entirely dismiss calls for moral self-improvement. But it also seems unlikely that withdrawing from one’s relationships with loved ones (or other sources of personal fulfillment) would promote global well-being. Attempting such withdrawal would risk severe depression, which is hardly conducive to a morally successful, high-impact life. For this reason, if we step back and ask what sort of life one should lead, utilitarians are likely to endorse forming friendships and other special attachments, since these help motivate us to achieve our other goals, including promoting global well-being. (This response also applies to the special obligations objection to utilitarianism.)
These facts about human psychology provide utilitarians with strong reasons to resist pressure to alienate themselves from sources of personal meaning and value. If striving for pure impartiality, and starving yourself of ordinary human affection, would prove counterproductive, then utilitarianism certainly would not recommend any such detrimental course of action. Even if such striving were in some sense more “rational” or “objectively warranted”, utilitarians do not intrinsically value rationality or warranted attitudes; they value well-being. And if overall well-being is better promoted by embracing some of our human foibles, then by utilitarian lights we should embrace those foibles.
Still, this response may seem unsatisfactory to some. Even if utilitarianism can endorse personal concerns for their usefulness, we may worry that this is the wrong kind of reason for such endorsement—or at least less than we might have hoped for. It’s not just that visiting your friend in the hospital out of direct concern would be more useful than visiting in order to maximize the overall good. Direct concern also seems intrinsically more appropriate.
Another way to bring out this worry is to note that the sophisticated utilitarian seems to exhibit what Michael Stocker describes as “moral schizophrenia”5—that is, a troubling disconnect between the (utilitarian) normative reasons they accept in theory and the (personal) motivating reasons that they act upon in practice.6 As a result, sophisticated utilitarianism might seem to condemn us to do the right thing for the wrong reasons, like someone who saves a child from drowning purely in hopes of getting their name in the newspaper. But of course visiting your friend out of direct concern does not seem at all like the “wrong reason”. Quite the opposite: the alienation objection stems from the conviction that this seems a much better reason to visit than any considerations to do with maximizing overall value. But if direct concern is really the right reason to act, then our moral theory should reflect this. Helping individuals, and not just “overall well-being”, should be recognized as an intrinsic moral goal by our moral theory. Our next section explores how utilitarianism might accommodate this idea.
The Subsumption Strategy
Richard Yetter Chappell argues that utilitarians can accept many personal reasons (to help particular individuals) at face value, in theory as well as in practice, and thus avoid both alienation and moral schizophrenia.7 The key idea here is that overall well-being only matters because each particular individual matters.8 So while utilitarians may speak, in abbreviated fashion, of wanting to promote overall well-being, this is just a way of summarizing a vast array of specific desires for the well-being of each particular individual. And while we have most reason to do what will best promote overall well-being, the particular reasons we have for so acting will instead stem from the particular individuals whose interests are thereby protected or advanced.
We may thus secure the desired result that the correct moral reason to visit your friend in the hospital is that doing so would cheer him up. If some alternative action would better promote overall well-being, then you would have stronger moral reasons to help those other individuals instead. But in either case, you may be properly motivated by direct concern for the affected individuals, rather than being driven by anything so abstract as the “general good”.
If successful, this solution promises to cut off the alienation objection at its root. According to this diagnosis, the alienation objection stems from a common misconception: the idea that utilitarianism must be fundamentally impersonal in its justifications—concerned with abstractions (like aggregate well-being) rather than concrete individuals. If concern for the overall good is instead built up out of concern for each individual, and these individuals have direct and foundational moral importance, then there is nothing in the theory that should lead us towards alienating ourselves from them. Direct concern for others is precisely what is rational and morally warranted, on this view of utilitarianism. We need only resort to impersonal moral motivations when our psychological capacities for direct concern run out, and we cannot give to billions of strangers the kind of personalized concern that—according to this interpretation of utilitarianism—they truly warrant.
One might object that we care far more about those we know than about distant strangers. Does it follow that our level of direct concern for those we know is excessive? Assuming that greater impartiality is truly warranted, it’s an interesting question whether we should “level down” by caring less about those we know, or “level up” by caring more about strangers. People are often quick to assume that, by utilitarian lights, we care too much about those we know. But why think that? We are far more aware of those we know, whereas strangers we tend to simply ignore. As a general rule, we should expect to appreciate the value of those we closely attend to more accurately than the value of those we largely ignore. So we should expect that the amount of care that is truly universally warranted is closer to the amount we currently give to those we know best.
So utilitarianism seems well able to vindicate direct concern for individuals.9 But this approach has its limitations. Many other common interests (such as sports and hobbies) are most plausibly of only instrumental value. And deliberate attention to this fact could risk alienating the hobbyist, athlete, or sports fan from their favored activity.
Is this a problem? It may be a practical problem, but that can be addressed through sophisticated utilitarianism (as described in the previous section): Just as it does not pay for insomniacs to dwell on the importance of sleep as they lie in bed awake at night, so the happiness of a tennis player may be best served by “devot[ing] himself more to the game” than might seem strictly warranted from the point of view of the universe.10 Utilitarians have long stressed that excessive deliberation can be counterproductive,11 and instead recommend a more strategic, multi-level approach to agency and decision-making.
So the practical problem is resolvable. And there is no theoretical problem or objection here, so long as we can agree, on reflection, that our attachments to sports and hobbies are unlike our attachments to other people. Specifically: there does not seem any deep moral error involved in regarding hobbies as having merely instrumental value. Our hobbies do not really have intrinsic value themselves, but are at most useful means to other—intrinsically valuable—ends such as happiness or social camaraderie.
Conclusion
It would be deeply alienating for a moral theory to invalidate the overwhelming majority of our ordinary motivations, including moral motivations that stem from direct concern for particular individuals. Utilitarians may seek to avoid this fate via sophisticated utilitarianism or the subsumption strategy. Each approach has its limitations. But by suitably combining the two—insisting upon the subsumption of genuine intrinsic goods, together with a sophisticated approach to merely instrumental goods—utilitarians may be able to offer a full response to the alienation objection.
(Note that this article exclusively addresses the concern that utilitarianism might seem to invalidate our ordinary motivations. For the distinct worry that it too easily overrides our personal projects and interests, see the Demandingness Objection.)
Resources and Further Reading
- Richard Y. Chappell (2021). The Right Wrong-Makers. Philosophy and Phenomenological Research, 103(2): 426–440.
- Barry Maguire & Calvin Baker (2020). The Alienation Objection to Consequentialism, in D. Portmore (ed.) The Oxford Handbook of Consequentialism. Oxford University Press.
- Philip Pettit & Geoffrey Brennan (1986). Restrictive Consequentialism. Australasian Journal of Philosophy, 64(4): 438–455.
- Peter Railton (1984). Alienation, Consequentialism, and the Demands of Morality. Philosophy and Public Affairs, 13(2): 134–171.
- Michael Stocker (1976). The Schizophrenia of Modern Ethical Theories. Journal of Philosophy, 73: 453–466.
- Bernard Williams (1981). Persons, Character and Morality. In Moral Luck: Philosophical Papers, 1973–1980, Cambridge University Press.
This example is adapted from Stocker, M. (1976). The Schizophrenia of Modern Ethical Theories. Journal of Philosophy, 73: 453–466, p. 462. Of course, if you wanted to cheer up your friend, you would refrain from voicing such a callous thought aloud. But it seems bad enough to even be thinking that way. ↩︎
Williams, B. (1981). Persons, Character and Morality. In Moral Luck: Philosophical Papers, 1973–1980, Cambridge University Press. ↩︎
Railton, P. (1984). Alienation, Consequentialism, and the Demands of Morality. Philosophy and Public Affairs, 13(2): 134–171, p. 153. ↩︎
Railton, P. (1984). Alienation, Consequentialism, and the Demands of Morality. Philosophy and Public Affairs, 13(2): 134–171, p. 151. ↩︎
Stocker, M. (1976). The Schizophrenia of Modern Ethical Theories. Journal of Philosophy, 73: 453–466. ↩︎
This is related to, but subtly distinct from, the standard multi-level utilitarian distinction between one’s criterion of rightness and one’s decision procedure. Multi-level utilitarians note that heuristics (such as respecting rights) might help us to better achieve utilitarian goals, but this is a mere change in strategy, not a change in what they ultimately want. Sophisticated utilitarians go further, adopting non-utilitarian goals or intrinsic desires when this would have good results. This introduces a disconnect between theory and motivation that is not necessarily found in routine multi-level utilitarianism. ↩︎
Chappell, R.Y. (2021). The Right Wrong-Makers. Philosophy and Phenomenological Research, 103(2): 426–440. ↩︎
Chappell, R.Y. (2015). Value Receptacles. Noûs, 49(2): 322–332. ↩︎
More generally, the subsumption strategy may extend to whatever welfare goods the utilitarian recognizes as having intrinsic value. Depending on their theory of well-being, this might include just happiness, just desire satisfaction, or any number of putative objective goods such as friendship, knowledge, etc. ↩︎
Railton, P. (1984). Alienation, Consequentialism, and the Demands of Morality. Philosophy and Public Affairs, 13(2): 134–171, p. 144. Note that Railton uses the tennis player example in a different context (and with a different contrasting motivation) than our use of it here. ↩︎
Pettit, P. & Brennan, G. (1986). Restrictive Consequentialism. Australasian Journal of Philosophy, 64(4): 438–455. ↩︎