The oldest problem
There is an ancient problem, and it stoppeth most things good. Some people care about the common good, others don’t, and we don’t know who is who. This problem is, to some extent, our unique heritage. I am sure chimpanzees vary in their level of group altruism, but since they have much less of it, and since their social structures require much less of it, it just isn’t such a big problem for them. For humanity, the struggle to control defectors, and the dangers of being ruled by them, have written our history. It has never been harder than now, with complex and anonymous societies and distant rulers whom we do not know personally.
Although it’s partly a matter of context and strategy selection, I do believe that there are stable individual differences in commitment to collective success. I also believe there are related differences in traits like cruelty.
At some point, I became intrigued by the following question:
What if we could know for sure that someone wasn’t a bad egg before bestowing leadership on them? Would things be better?
Now, the first response of the sophisticated reader will be to regard the whole thing as naff. They will object:
What do good and bad even mean?
Here, I mean something specific: a person’s commitment to the common rather than the private good, in situations in which they are called to serve the common good. Of course, there are going to be differences over how to serve the common good and over what the common good is, but we have at least a rough concept of what pursuing the common good, as opposed to private goals, lusts, vendettas, etc., means.
But how do we define such altruism precisely?
Drawing out fine distinctions about who truly favours the common good is unnecessary. We don’t need to define the ranking precisely to have a sense of who is in the bottom decile and, therefore, shouldn’t be given power. For such people, it usually isn’t subtle.
Don’t you know that [clashing ideas/structural problems/inherent features of humans rather than individual variation] are what causes strife, not bad character?
In my experience, it’s often such factors working synergistically with bad character.
A) Bad people take advantage of structures that allow bad behaviour, and they do so in ways that are often not “what anyone would do in the situation” but go far beyond that.
B) At the level of institutions, bad structures are often at least partly due to malicious actors, because:
i. They were built that way in a damaging attempt to control malicious actors.
ii. They are self-engineered playgrounds of the vicious will.
iii. Or, paradoxically, both at once.
Structures like capitalism have a will of their own and characteristic tendencies, such as apathy towards inequality and externalities. These will cause problems even if such structures are staffed with the nicest of people, but good people will reduce the number of problems.
Indeed, trying to engineer different and better structural contexts, governments, etc., will be easier if we know the cooperating parties are all aiming at some conception of the common good.
But this is all hypothetical, there’s no way we could determine who does and doesn’t care about the common good.
I’m not sure that’s true.
Technologies like fNIRS and EEG are becoming more accessible to the average person. Machine learning-driven reconstruction of deeper brain regions using fNIRS is in progress, although at a very preliminary stage.
As always with brain research, we are sorely limited by the power of our tools, and by the expense of the most powerful among them. The dream of high-resolution brain research using cheap tools inches closer. We already know multiple indicative features of psychopathy, e.g., reduced connectivity between the ventromedial prefrontal cortex and the amygdala. Presumably, we might be able to detect less extreme variants of unconcern for the public good.
Contemporary brain science of the form “this region causes this behaviour” is largely rubbish. It’s p-hacked to death, and it uses small, often non-representative samples. We would need much larger and better studies before attempting anything I describe, and yet it may not be that difficult. We may even be at the juncture now where one determined push could make this viable.
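Even granting a fairly accurate screen, the base-rate arithmetic matters: when the trait being screened for is uncommon, a meaningful share of the people flagged will be flagged wrongly. A minimal simulation of this, with every number (prevalence, sensitivity, specificity) invented purely for illustration and corresponding to no real study:

```python
import random

random.seed(0)

# All figures below are hypothetical, chosen only to illustrate base rates.
prevalence = 0.10    # fraction of people genuinely in the "bottom decile"
sensitivity = 0.80   # chance the screen flags a genuinely low-prosociality person
specificity = 0.95   # chance the screen correctly clears everyone else

population = 100_000
flagged_true = flagged_false = 0

for _ in range(population):
    low_prosocial = random.random() < prevalence
    if low_prosocial:
        if random.random() < sensitivity:
            flagged_true += 1      # correctly flagged
    else:
        if random.random() > specificity:
            flagged_false += 1     # false positive

flagged = flagged_true + flagged_false
ppv = flagged_true / flagged       # share of flagged people who deserve it
print(f"Flagged: {flagged}; genuinely low-prosocial among them: {ppv:.0%}")
```

Under these invented numbers, roughly a third of everyone flagged would be a false positive, which is exactly the worry raised next.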
But if we managed to get this to work, it would be a nightmare! Inevitably, there would be false positives. And even were the measures accurate, do we really want to set up yet another social hierarchy of sneering and status: one of measured virtue?
This is what worries me.
There is a solution, although whether we would have the will to stick with it, I am unsure. Norms and laws:
A) Discouraging individuals from revealing their own measured pro-sociality publicly.
B) Forbidding employers from asking for information about the pro-sociality of current or prospective employees, unless the role is one of a handful for which a measured pro-sociality check is appropriate: politicians, judges, high officers in corporations and NGOs, senior bureaucrats.
might prevent this from turning into yet another status hierarchy.
I do worry about compressing people's life chances, even if they are selfish, and I am aware there will be false positives. Consider criminal record checks—which are by no means the same as character checks in my sense—they are far, far too widely used. They are appropriate in perhaps 1-2% of jobs but are used for perhaps 20% in my country. We do not want ubiquitous brain screening in the employment process.
But what if it doesn’t work?
Assuming the process itself works, there are four possible failure modes I’m aware of:
A) It turns out that personal selfishness has little to do with society’s ills.
B) It turns out that selfish people are so good at getting into positions of influence that the light touch approach I describe does little to restrain them.
C) It turns out that nasty people are essential to the running of society: low-grade corruption, the crushing of enemies, a ruthless commitment to personal advancement over choosing the right policies, all of this is good for us overall; or other traits that are good for us cannot be separated from these. As an example of a kind of selfishness “greasing the wheels” in a way that is ultimately beneficial to everyone, some have argued that the massive reduction of pork-barrelling in America led to a massive increase in polarisation, as it was no longer possible to come to an “understanding” on passing necessary legislation. Could similar effects arise if selfish people were greatly diminished in political power? Maybe.
D) The harm done by discrimination against the less altruistic and by the self-righteousness of the more altruistic outweighs any good done.
What to make of these? A) is a largely harmless failure mode. B) is likely harmless as well; although I suppose nasty people might become even nastier to overcome the barriers put before them, I find it hard to think this will make things worse on balance. C) is possible, I suppose, and does represent a risk, but our old friend, the small-scale trial, is available. D), as above, is my real worry.
There’s another failure mode I’m worried about, but it’s too speculative to put on my main list. What if, in coming to understand that the big decisions are being made by well-meaning people, we lose our rage against injustice?
Interlude: An autonomous trial
Suppose we had some process by which we could discern orientation to the public good. One thing we could do is gather a sample from the 90% of the population who clearly aren’t sociopaths and see how they think things should be done when given time to discuss matters in depth. One could even gather such people into a common organisation and see what, if any, advantages accrue to its operation.
This combines with another idea of mine and Nicholas Gruen’s [who has not, and probably wouldn’t, endorse this piece]: a permanent body, selected by sortition, keeping a running commentary on government. What would a miniature public, the only qualification for which was “definitely not a sociopath,” say about our public life? Could it act as a signal to non-psychopaths about what they would support if they had more time to think and weren’t being manipulated by bad actors?
Isn’t it Victorian to think that assessments of virtue should form part of public life?
The idea that what we are supposed to learn from the Victorians is a kind of separation of ethics and politics seems wholly wrong to me. Virtue matters, it’s just not about covering up piano ankles.
You’re aware that this sounds like the initial premise of a dystopian work, right?
Yes, but I don’t think that in itself means much.
Another way
There’s another strategy that would achieve the same thing without the same civil liberties problems. The difficulty is that it has other, much, much worse civil liberties problems. If someone made an actually working lie detector, again, something wholly conceivable given advancing brain science, the world would change entirely.
A world with a working lie detector would be utterly alien: social debates, elections, relationships, business, etc., might all look completely different, and the changes in our consciousness and self-consciousness are hard to fathom in advance. It shocks me how little curiosity science fiction writers and other speculators have applied to the idea. The true power of a lie detector would not be in the detection of lies but in the verification of honesty. The certainty of honesty has a power about it, and I suspect we would be much more moved by each other’s words if we knew beyond all doubt that we were speaking the truth as we saw it.
I see a distinct risk of Goodharting here. At a small scale, one of the main contributors to altruism is empathy, so this measure would load heavily on empathy. But in the sort of large-scale, senior positions you discuss, empathy is a significantly less valuable predictor of goodness.
There's another failure mode, similar to but not quite the same as (C): perhaps certain antisocial traits necessarily correspond to certain socially useful traits; e.g., a neutral "ambition" trait corresponding to antisocial competitiveness but also prosocial grit.