I respect @christapeterso immensely, but in this case I’m going to have to disagree because I am literally a speaker subjectivist.
First of all, let me acknowledge my simplicity in these matters. This is very difficult philosophy, at the intersection of many hard problems. I’m barely even a professional philosopher, and my work is far from this. This is just my sense of things. Perhaps you’ll find it obviously wrong. Also, even if I were a much more gifted philosopher, this is a blog and hence much abbreviated.
All that said, I hold the view:
Statements about morality are really statements about one’s own values
Although I prefer even the straight version to most accounts of moral language, I’m inclined to like even better a modified and more complex version that holds:
Statements about morality are really statements about the values that a better version of you [better as defined by you] would hold.
What does better mean? Are we illicitly sneaking in moral content with the better version of you constraint? Not at all. A better version of you is whatever you think would be a better version of you. If you would trust the moral views of a rational and perfectly informed version of yourself more than your own values, then the values of such a being are the truth conditions of your statement.
This is not to say merely that moral statements “reflect nothing but” one’s own values, or any such more limited claim that a moral error theorist or expressivist might agree with. Rather the claim is that we should construe moral language as literally being about one’s values, or the values of an idealized version of oneself, much like statements about clouds are statements about watery objects in the sky.
Why do I say “we should construe moral language” in such a way? Either moral language does or doesn’t mean what I say, surely? I think that’s too hasty. The situation of moral language is a bit of a best deserver situation. Intuitively we’d like moral language to do a lot of things. A complete list of desiderata is impossible, but just as a start we’d like it to refer to:
1. A realm of objective truths, or if not that, a series of ersatz equivalents [if not objective truths, at least intersubjective truths true for all rational and/or self-aware agents- or if not that, at least intersubjective truth for all humans- or if not that, at least intersubjective truth for all non-pathological humans.]
2. That gives anyone who recognizes a moral truth that you should do X a reason to do X. That reason should be strong and maybe even overwhelming, in the sense of always trumping other considerations where they conflict.
3. This realm of truths should be knowable, or at least an important fragment of it should be knowable.
4. And this realm of truths should include at least the core and obvious views that you hold about morality (at least pro tanto you shouldn’t kill, maim etc., and at least pro tanto you should be compassionate etc.)
The problem is that it’s really not clear that anything meeting all these criteria exists. It’s not clear that there’s something in the world that gives you a reason to X, whatever your values and situation. No one has yet come up with an account of such reasons that I find satisfactory.
In light of that, I suppose you’ve got two choices with regard to moral semantics. You can scrap the discourse and go for error theory, or you can say “We’re going to find the best thing we can to fill in the gap.” I think if you’re going to go the second route, speaker subjectivism is a pretty good bet. The advantage of speaker subjectivism is that it gives you 2, 3 & 4 and maybe even gives you a (very weak, ersatz) version of 1.
It gives you 2, because the fact that a version of you that you would consider more perfect than yourself would want you to do X inherently gives you reason to do X.
It gives you 3 because, while moral judgements can be wrong on this view (your better angel might have views you wouldn’t expect), in a lot of cases you can anticipate their view. You can reasonably guess they wouldn’t like unnecessary killing, for example.
And it gives you 4 because it’s very likely that a perfected version of you would agree with you on moral claims so obvious as to be platitudinous, even if not on some of the details.
Unfortunately, it doesn’t give you 1.
A list of some of the theoretical benefits of speaker-subjectivism
Here are some of the more general perks of the speaker-subjectivist semantics I’ve described:
It gives an account of why moral statements can be true- and thus avoids the weirdness of moral error theory, which implies that all positive moral statements are false.
It avoids the odd implication of many theories of moral semantics that moral statements are neither true nor false.
It avoids having to do “anything fancy” in terms of the semantics. We don’t have to pull any moves like saying “well actually, truth in the case of moral language means something different to truth elsewhere”. Truth is still about facts, even if not the facts we might have initially thought.
It gives an account of why believing that X is wrong necessarily gives one a reason not to X. If I know that a version of me that I think is superior would dislike me Xing, that gives me a reason not to.
It accounts for how we can know moral truths- and know them with a high degree of confidence even in the face of moral disagreement.
It has good prospects for explaining what happens when people argue about morality. Sure, when I say “slavery is wrong” and Bob says “slavery is sometimes right” we are not literally contradicting each other, because we’re each making statements about our own values, but the practical clash between two people explains why we might want to argue with each other. It’s sort of a contradiction in the same way “I want to go to the beach” “no, I want to go to the shops” is sort of a contradiction. This isn’t perfect- ideally these statements would be literally contradictory- but it’s something.
It explains why we debate morality and can do so profitably. We have enough common ground in our ethical feelings, and very possibly, our ideal selves have even more.
It explains how we debate morality- specifically the method of cases and reflective equilibrium- which are not, despite the common misconception, original to 20th century analytic philosophy. These methods seem very compatible with the semantics I described- very compatible with the behavior of people who are (perhaps without knowing it) trying to figure out what an ideal version of them would think, and trying to approach that by considering a range of cases and principles.
The catch: Relativism
However, the catch is relativism. When I say “slavery is always bad” and a Roman says “slavery is sometimes good”, what we’re saying might both be true. Although the experimental philosophy is a little bit unclear on this point, most of us would hope for a moral semantics on which, when I say slavery is bad and Bob says slavery is good, I’m right and he’s wrong.
Perhaps the view only implies a mitigated form of relativism. It’s possible that if we were all idealized versions of ourselves- better angels in the terminology of this post- we might tend to agree on far more moral questions than we do now. It is even possible that virtually all or all non-pathological humans would agree on the core moral questions. In this case, relativism would be (sort of) false.
But this is just a hope. Idealized versions of ourselves could disagree even more than we do now. This is an empirical question, one that we may never be able to resolve.
The other catch: What kind of values
Earlier I appealed to your “values” to define moral truth- but what kind of values? The fact that a perfected version of me wouldn’t like chocolate mint ice-cream doesn’t prove that if I said “chocolate mint ice-cream is immoral” I’d be right. Neither I nor an idealized version of me dislikes chocolate mint ice-cream in a particularly moral way.
Clearly what we mean is moral values, but it seems regrettable to give an account of moral truth that depends on an already existing notion of moral values- a bit circular, frankly. Now there’s lots of room to iron out a non-circular conception of moral values- but that’s a challenging task that I leave for elsewhere.
[The relevant distinction in the case of chocolate mint ice-cream is going to be that I don’t dislike it for other people, just for myself- but other people have worked this out at length elsewhere.]
The other, other catch: We don’t take ourselves to be talking about ourselves when we do ethics
We don’t take ourselves to be talking about ourselves when we do ethics- yet this theory implies we are. Exactly how serious an objection this is, I’m not sure. I think philosophers are pretty jaded by this kind of objection- psychology and semantics can be quite different things- but I think for a lot of ordinary people this will be an important objection.
My view is it’s a fairly minor problem if it’s a problem, but your mileage may vary.
An argument for relativism
I want to draw attention to an advantage of relativist accounts that is often forgotten- an advantage that will perhaps soften the blow.
Suppose I meet a slave owner and we argue about the morality of owning slaves. I am very confident that it is not a good thing for some people to be slaves, whereas he is confident that it is.
I can be very confident that I am right, even though I am aware of all his lines of reasoning and he of mine, and the disagreement still persists, and even though there is no sense I can point to in which my epistemic position is superior to his. It seems much easier to explain this if some kind of relativism is true than if it’s false. If universalism were true, then I should be troubled by the symmetry of our epistemic positions- yet clearly I have every right to be extremely confident in my position. Now such confidence might have a universalist explanation, but it seems to me easier to explain on a relativist semantics.
To eliminate or not eliminate: Double relativism
When it comes to designing semantics for folk discourse, there’s a lot of indeterminacy in exactly how the folk discourse is used. Also, there may be nothing in the world which fulfills all the desiderata of the folk discourse.
The question then, in the face of this indeterminacy and our inability to find something that meets all our requirements- a question previously discussed by, for example, Stephen Stich- is whether we should eliminate the relevant discourse or reconstrue it so that it works. Reconstruing it means giving up on some desiderata of what a semantics of the discourse should do, or accepting ersatz substitutes for those desiderata.
Stich suspected that in the case of folk psychology the answer is likely to be indeterminate. That’s my sense in the case of morality too. A case can be made for construing moral talk in a speaker-subjectivist way, but a case can be made for elimination. Moreover, in the case of ethical talk, the right semantics for moral talk might vary depending on which person, and which context, one is talking about, as for example (—-) has noted.
For Alice, given the strength of her commitment, say, to the objectivity requirement outlined above, it might be true for her to say that on the best moral semantics we can create, moral terms have no referent. Alice will therefore be a moral error theorist. For my moral speech though, I am happy with a speaker subjectivist semantics. I think it captures enough of what I mean by moral talk pre-theoretically to be acceptable.
Thus I endorse a doubly relativist meta-ethics: relativist about ethical commitments, and potentially relativist about meta-ethics itself.
Addenda: Conversation with Lance
Lance S. Bush writes
I’m a moral antirealist and routinely find myself defending antirealist positions from bad objections, even if I don’t endorse those antirealist positions. This is true of speaker relativism as it is of error theory and noncognitivism. So, thanks for the post, and it’s good to see more people expressing views like yours. The original tweet asks if anyone is a speaker relativist. Such remarks are often unclear. But if the claim is the one you offer:
“Statements about morality are really statements about one’s own values”
…these sorts of remarks trouble me. Which statements, exactly? Is this an empirical claim about what people in general mean when they engage in moral discourse? Or a psychological claim about what they think?
If not, and it is instead a kind of definition, e.g., “if something is a moral claim, then it is a statement about the speaker’s own values,” and we go out and observe people saying things like “murder is wrong,” then are these people making statements about their own values regardless of whether that is their intention? For instance, suppose someone is a moral realist and is not intending to communicate their own values, but is instead intending to communicate what they consider to be stance-independent moral facts. Are their intentions irrelevant, and their statement nevertheless purports to be a statement about their own values? Or, instead, is this person not actually making a moral statement?
The same holds for your second definition:
"Statements about morality are really statements about the values that a better version of you [better as defined by you] would hold."
I prefer accounts like this to realist accounts, since I can at least make more sense of this account than of realist accounts, but it’s again unclear to me which statements this kind of account applies to. This isn’t what I mean when I make moral claims, for instance. Unless what I say doesn’t express what I mean. You say:
"Rather the claim is that we should construe moral language as literally being about one’s values"
…This sounds more like a prescriptive than a descriptive claim. That is, it looks like a proposal that we use moral language this way, rather than a claim that we in fact do so. The remarks that follow seem to reinforce this. But if this is the intent, this wasn’t clear from the outset, so I’m a bit puzzled as to whether this is a descriptive or prescriptive claim. If one’s position is “we should use language this way” it seems a bit strange to say that “moral statements mean X” rather than “I think it’s a good idea to use moral statements to mean X.”
Next, you mention some desiderata we might want to get out of moral language, including that it is objective (or “ersatz equivalents”), but your proposal “unfortunately” doesn’t give you this. Why is this unfortunate? I don’t find such accounts satisfactory myself, but I also don’t find anything unfortunate about their absence. I’m puzzled when people do grant that there’d be something good or desirable about objective moral values.
In addition, there are some other concerns I have with the advantages of your proposal. If it is a prescriptive proposal, some of the listed benefits wouldn’t make much sense, since they seem to be advantages in accounting for how people already speak and act when it comes to morality, e.g., “It explains why we debate morality and can do so profitably. We have enough common ground in our ethical feelings, and very possibly, our ideal selves have even more.”
I really enjoyed the remarks on indeterminacy at the end. I argue for descriptive folk metaethical indeterminacy in my dissertation, and marshal a number of theoretical points and empirical findings to make my case.
Great post, glad to see people discussing metaethics here in a thoughtful and substantive way.
I respond
As you allude to, this post is not a claim about what people think of themselves as meaning when they use moral terms. I guess I have a picture of semantics that works like this. Folk discourse presupposes the existence of a thing that fulfills a bunch of roles. To create the semantics for that discourse, we search for the thing that fills those roles. Only sometimes we find that there is nothing that fulfills all the roles in folk discourse- but there are some things that partially do the job. At that point, we have a choice- scrap the folk discourse, or, through a somewhat procrustean process, "squeeze" the folk discourse into whatever is its 'best fit'. This is neither a fully prescriptive nor a fully descriptive project- it's saying "hey, if we treat moral talk as being about this, then it kind of makes sense of many features of moral talk- even if it doesn't make sense of all of them perfectly". It's in line with the general "Canberra plan" approach to philosophy I endorse. In the conclusion, I'm saying: if we take this Canberra plan approach, the "right" way to reconstruct moral language might vary from person to person.
The reason the absence of an objective realm of moral truths is unfortunate is that, if it existed, we would thereby have a semantics for moral talk that fully matched how people use moral talk, and also aligned well with what people take themselves to be doing. This would be a "better deserver" as a candidate for a semantics of moral language.
I’ve never really understood why people find the reasons stuff so attractive. I’m an anti-realist myself but don’t see why the realists should have to explain how moral reasons work. Why is it not coherent for the realist to just admit that moral considerations aren’t motivating? To complain that moral considerations don’t provide reason to act seems like just a less clear way of stating the better objection, that it’s not clear where moral considerations are supposed to get their “objective” bite in the first place. But if the realists somehow succeeded at showing that moral statements can be true in some universal sense, I don’t think they need to show that their truth provides motivation for action or anything like that.