4 Comments
Jun 11 · Liked by Philosophy bear

> We know that someone participates in at least one bad activity but know nothing else about them. What is the probability they are a bastard?

>

> It’s quite low. About 20% I think.

No, it's over 50%.

You said that 10% of the population participate in each bad activity, and 90% of those are bastards.

With 20 bad actions, the absolute maximum fraction of non-bastards taking part in at least one is 20 × 10% × 10% = 20% of the total population. In reality it will be noticeably lower because of overlap.

But practically all of the bastards will have done at least one bad thing. Their chance of doing any individual bad thing is 9%/20% = 45% (9% of the population are bastards doing each bad thing, out of the 20% of the population who are bastards), so with 20 independent things the chance of doing none at all would be (1 − 45%)^20, or less than one in 100,000. So the bastards in your "at least one bad thing" sample amount to roughly 20% of the total population, which outnumbers the non-bastards in the sample and puts the conditional probability over 50%.
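Under the same independence assumptions, and taking bastards to be 20% of the population (as the reply's figures imply), the Bayes calculation can be checked directly. A minimal sketch with those toy numbers:

```python
# Toy figures from the thread: 20 independent bad activities, each done by
# 10% of the population; 90% of each activity's participants are bastards;
# bastards are 20% of the population overall (an assumption carried over
# from the original post's setup).

n_activities = 20
p_bastard = 0.20            # prior: fraction of population who are bastards

# Per-activity rates implied by the figures:
# 9% of the whole population are bastards doing a given activity,
# so a given bastard does it with probability 0.09 / 0.20 = 45%.
p_act_given_bastard = 0.09 / p_bastard
# 1% of the whole population are non-bastards doing a given activity,
# so a given non-bastard does it with probability 0.01 / 0.80 = 1.25%.
p_act_given_nonbastard = 0.01 / (1 - p_bastard)

# Probability of doing at least one of the 20 independent activities:
p_any_given_bastard = 1 - (1 - p_act_given_bastard) ** n_activities
p_any_given_nonbastard = 1 - (1 - p_act_given_nonbastard) ** n_activities

# Bayes: P(bastard | at least one bad activity)
numerator = p_bastard * p_any_given_bastard
denominator = numerator + (1 - p_bastard) * p_any_given_nonbastard
p_bastard_given_any = numerator / denominator

print(f"P(any | bastard)     = {p_any_given_bastard:.6f}")   # ~0.999994
print(f"P(any | non-bastard) = {p_any_given_nonbastard:.4f}")  # ~0.2224
print(f"P(bastard | any)     = {p_bastard_given_any:.3f}")     # ~0.529
```

The answer comes out around 53%, agreeing with the "over 50%" claim.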

author

Thank you! I stuffed up my toy figures. This section will be back up when I redo them.


I think your Jerrow and Wenes scenario is not as clear-cut in terms of who should be considered "worse" as you're making it out to be (or perhaps there are some complications which explain why conventional morality gets the result it does).

One complication is that Jerrow is a (mild) "sex pest," and sexual ethics is partly based on "sacred" values, so its taboos are not evaluated purely logically. For the same reason rape is now considered more villainous than murder in Western countries, comparing sex(-adjacent) offenses against other offenses will often lead to strange conclusions.

Another complication is the ease of "defending" your moral judgement to outsiders. Even though the group deciding between the two guys are all vegans, they still (in their heads at least) are "justifying" why they don't want to be associated with someone. For the vegans, both men are objectionable, and casting either out of the group would be a defensible call. But if a non-vegan asks why they don't like Wenes, they have to be prepared to argue the moral case of veganism, whereas if they are asked the same question about Jerrow, the vegans can simply say "he's a sex pest." Even though the vegans have their own moral framework, they are still comparing the two fellows against "average social morality," and coming down harder on the one whose behavior people outside their group would find more objectionable.

Jun 12 · edited Jun 12

Some remarks, part by part:

> “Hamas deserve our unconditional support. Not because I agree with their strategy – complete disagreement with that – but the situation at hand is if you have no hope … nothing can justify what has been happening to the Palestinian people for 75 years.”

Unfortunate phrasing indeed. It seems, from the line about strategy, that the support is not actually unconditional, and one must wonder what, concretely, "support" means given a "complete disagreement" about strategy. But however confused or contradictory the quote may be, it is much less heinous than it first appears.

> The strongest reason against her expulsion is this. She would not have been expelled if she had been talking about the Israeli government in exactly the same terms:

This does not seem, in itself, an especially compelling argument against the expulsion. If one denies that weakness is a morally exculpatory factor (as I hold one should), the absolute magnitude of the harms matters less than the degree of wickedness of the actions taken. I would guess that the university administrators see the flipped statement supporting the Israelis as orders of magnitude less troublesome than the actual statement, but the interpretation of that depends on one's judgment of the overall conflict, so the symmetry argument really just reduces to an argument about the overall moral nature of the conflict.

In the end, however, I have enough support for freedom of speech (your point 3) that I likewise highly disapprove of the expulsion despite my lack of endorsement of the symmetry argument, so I suppose we end up in similar places on the issue writ large.

On the spiritual reckoning (or lack thereof): it's good to see someone explicitly note the rather bizarre implicit rhetorical move whereby it's taken as a background fact that "I disagree morally with Silicon Valley idea clusters ⇒ AI won't be significant".

> In the debate about how good AI will get, a lot of people are making a much more sophisticated point, namely that a logistic curve looks like a sigmoid curve until it doesn’t.

Should this be saying "an exponential curve looks like a logistic/sigmoid curve until it doesn't" or similar? The logistic curve is a sigmoid curve, so the sentence doesn't really make sense as-is.

> If anything, I am inclined to say Justin is worse. Yet most people, I think, will instinctively revile the character of Justin far more-

Should this say "I am inclined to say Mercia is worse"? The "yet most people" following it seems ill-fitting if you're actually agreeing with the hypothetical majority.

Another remark on the character judgments is that people may more readily form negative judgments about concrete, familiar harms (e.g. drunken assault, emotional manipulation, etc.) than about abstract or novel ones (e.g. tax fraud, meat-eating, etc.). Even though they would, if pressed, rate the harms as similar, the former categories are more emotionally compelling.

The note on designing utopias brought to mind the old Fun Theory sequence (https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-theory-sequence). Have you read it? It covers some interesting ground on the topic of the elements that would actually be necessary for a good-to-live-in utopia.
