Moral realism has many senses, but call “practical moral realism” the view that there are moral consensuses that humans can figure out and come to agree upon, which all or most humans really, rationally ought to accept, and about which some people right now are correct and others are wrong. Some people are doing and advocating what they ought to do and advocate, and others are not, in the sense of instrumental ought outlined below. There’s an important sense in which moral relativism, non-cognitivism, and nihilism are all rejected.
People disagree about ethical questions: what should be done, and what is good and bad.
It’s also apparent that many people, if they thought about it more, would change some of their views on ethics. People can be quite dogmatic, but reflecting more, or knowing more, can change ethical positions in some cases. I’ve seen it happen.
There’s a list of improvements to my own ethical judgment that I would accept. One way to think about this is that there’s some version of myself that I would prefer to my current self as my ethical compass. Possible attributes of this improved self include:
Knowledge of all the facts.
Excellent reasoning capacity.
Feeling sympathy and experiencing empathy, but never being overwhelmed by rage or sadness.
A state of reflective equilibrium.
Impartiality with respect to certain kinds of ethical decisions, with no personal stake in the question under discussion.
Critically, these are all changes I recognize as improvements. Thus I would trust a being improved in these ways more than I trust myself.
Consider some ethical question Jane and Jasmine disagree on. We can ask: if Jane and Jasmine were each altered into the version of themselves they would trust most, would they come to agree on that question? Would they converge?
We can add even more room for idealization: the idealized version of yourself can itself be changed in ways that would make it, by its own lights, better still.
Say A would trust B more than themselves on ethical questions.
Suppose B would trust C more than themselves on ethical questions.
Then A should do what C tells them to do in ethical matters.
Why? Since B trusts C more than themselves, B would advise A to do what C would advise A to do.
Thus A would accept C’s instructions because of A’s trust in B.
In principle, this chain can proceed through indefinitely many levels.
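To make the shape of that argument explicit, here is a minimal formal sketch (the predicates T and D are just shorthand for the claims above, nothing more):

Let T(x, y) mean “x would trust y more than themselves on ethical questions,” and let D(x, y) mean “x instrumentally ought to defer to y in ethical matters.”

1. T(A, B) (premise)
2. T(B, C) (premise)
3. D(B, C) (from 2: trusting someone more than yourself, in this sense, just is being willing to defer to them)
4. B would advise A to do whatever C would advise (from 3)
5. D(A, C) (from 1 and 4: A defers to B, and B’s advice is to follow C)

The same step chains: if each person in a sequence A1, A2, …, An trusts the next more than themselves, then A1 ought to defer to An.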
Why is it important that the idealizations in question be idealizations the person themselves would accept? Because this retains a link with motivation. By definition, I am motivated to do what the version of myself I would most trust would do (that’s what’s meant by trust here). If idealization would bring us all together, there is a sense, even if a little distant in practice, in which we mostly want the same things; we’re just confused about them.
If you want to win a game of tennis, then there’s a sense in which you ought to train to win at tennis: an instrumental sense. This is the sense of ought in which you ought to do what will bring about what you desire. Since I would want to follow the principles that would animate my idealized self, even more than I want to follow the principles that animate me, I instrumentally ought to do what my idealized self would do.
We come now to the convergence conjecture: If advised by their idealized selves, whose advice they trusted more than their own opinions, a large majority of people would agree on the vast majority of ethical questions.
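Stated a bit more precisely (a sketch, with notation introduced only for this statement): write I(x) for the idealized version of person x, and V(x, q) for x’s verdict on ethical question q. The conjecture is that, for a large majority of pairs of people x and y, and for the vast majority of ethical questions q:

V(I(x), q) = V(I(y), q)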
If this conjecture is true, then there is a sense (only a sense) in which there is a “right” ethical position for humanity on all or most ethical questions.
Given the deeply different societies people have lived in, with very different ethical foundations, we can make a weaker version of the convergence conjecture, the social convergence conjecture: a large majority of people raised in our society, if advised by their idealized selves, would agree on the large majority of ethical questions.
We can also make topic-specific versions of the conjecture: a large majority of people, if advised by their idealized selves, would agree (e.g.) on the justice or injustice of the death penalty.
We cannot currently resolve these conjectures because we cannot implement idealization. It is unknown to what degree, for which people, and over which questions idealization would produce convergence. It is even possible that idealization would drive us further apart: given the opportunity to think and mull further, under more ideal circumstances, we might reject certain social scripts that currently bind us together on ethics.
Watching would-be Übermenschian conservatives thump their chests about how it’s good that the strong dominate the weak has sapped my confidence in ethical convergence, tbqh. The starting principles seem quite different.
The closest thing we have to an experiment on convergence is having philosophers spend a career pondering ethical questions. This does not seem to lead to convergence, although I have been unable to locate an experiment on whether philosophers are more likely than the general public to agree on everyday ethics, e.g., the sort of ethical questions that arise on r/amitheasshole.
It is possible that, as technologies of transhuman modification become available, we will, in at least a rough-and-ready way, conduct a practical experiment on convergence. We will see. If so, the results of that experiment will likely have more than merely theoretical implications.
To refresh: “practical moral realism” is the view that there are moral consensuses that humans can figure out and come to agree upon, which all or most humans really, rationally ought to accept, and about which some people right now are correct and others are wrong. Some people are doing and advocating what they ought to do and advocate, and others are not, in the sense of instrumental ought outlined above.
If the convergence conjecture is true, practical moral realism is true. If our idealized selves would agree with each other, but you and I don’t, then one of us is supporting a position that, if they understood things better, they wouldn’t support.
Since the truth or falsity of the convergence conjecture is (at least kind of) an empirical question, and since the truth of the convergence conjecture implies the truth of what I called practical moral realism, the truth or falsity of a kind of moral realism is an empirical question.
Practical moral realism due to ethical convergence could be true for some core set of moral claims, but not others. It could be, for example, that there is a fact of the matter that abortion is permissible, but no fact of the matter about whether or not you should push the fat man in the trolley case.
Practical moral realism, though it is a thesis in metaethics, has implications for normative ethics as well.
If the convergence conjecture is true, this has implications for how we engage in ethical decision-making. I might have reason to take the ethical views of others seriously in themselves. The conclusions that different people come to under different conditions might give me evidence about the conclusions I’d come to under more ideal circumstances. I won’t spell out all the peer-agreement and peer-disagreement implications and arguments here, but you can probably see what I’m gesturing at. Generally speaking, I think various forms of ethical realism give us reasons to take the ethical views of others more seriously. This gives us a reason to take democracy more seriously, at least sometimes.
Discussion about this post
Two questions:
Moral realism is the idea that there are stance-independent moral facts. Moral anti-realists reject this, either because there are no moral facts, or because moral facts are stance-dependent. There doesn’t seem to be much discussion of stance dependence or independence in the post. Is the post inconsistent with a position that claims that moral facts are stance-dependent?
I suspect that moral realists and anti-realists face similar epistemic situations and practical choices. Both groups want to improve their understanding of moral facts. What is at stake in the debate?
> Moral realism has many senses, but call “practical moral realism” the view that there are moral consensuses that humans can figure out and come to agree upon, which all or most humans really, rationally ought to accept, and about which some people right now are correct and others are wrong
I was disappointed that this post wasn't about meta-ethical realism. So I wrote that post instead:
https://ariethoughts.substack.com/p/why-the-truth-or-falsity-of-moral