5 Comments
John Quiggin

Amplifying what you've said, Bayesianism not only makes traditional epistemological positions obsolete, but raises a whole bunch of new problems. As well as "where do you get your priors", there's the whole question of bounded and differential awareness, which takes up most of my theoretical attention.

Matanya

I've decided to update my priors based on the fact that when I present people with objects and prompt them to theorize what category each may be a part of, assuming they don't run away, they will typically assign a category, correctly or incorrectly. Therefore, most people are category theorists, and I can safely assume that anyone I am asked about is probably a category theorist.

William of Hammock

Are you familiar with Amartya Sen's concept of "positional objectivity"? I feel that it might both improve your argument and help distinguish subjective from objective Bayesianism.

Sen's example is the claim that "from the earth, the sun and the moon appear to be the same size." In the objective Bayesian telling, it might look something like "the product of all positionally objective priors equals the 'view from nowhere/everywhere' of objectivity." A weaker claim, then, would be that objective base rates and posteriors should be the products of positionally objective priors.

One can then separate these, with more or less success, from subjective and intersubjective priors, where deviations from the positionally objective products can be attributed to subjective and intersubjective influences. For example, suppose you were part of a scouting team sent to see what size the moon appeared relative to the sun from each of the other planets in the solar system. You might have reason to distrust one member of your team, and so you carry a subjective prior about their subjective prior. This is further complicated because, perhaps unconsciously, your original degree of trust was picking up on intersubjective signals from the rest of the team. That is problematic, since the rest of the team could itself be untrustworthy or in error, so your distributed trust cannot be cleanly sorted even into positionally objective priors.
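
To make that combination concrete, here is a minimal, illustrative sketch (mine, not the commenter's): each scout's report about the apparent moon/sun size ratio is weighted by a subjective trust prior, and the posterior over the "positionally objective" quantity is proportional to the product of the trust-weighted likelihoods. All names and numbers are hypothetical.

```python
import numpy as np

# Hypothesis grid: the true apparent-size ratio we are trying to infer.
ratios = np.linspace(0.5, 1.5, 201)
prior = np.ones_like(ratios) / len(ratios)      # flat prior over hypotheses

# Each scout reports a measurement; "trust" is our subjective prior that the
# scout's report is informative rather than pure noise.
reports = [
    {"measured": 1.02, "noise": 0.05, "trust": 0.9},
    {"measured": 0.98, "noise": 0.05, "trust": 0.9},
    {"measured": 1.30, "noise": 0.05, "trust": 0.3},   # the scout we distrust
]

def likelihood(report, ratios):
    """Mixture likelihood: with probability `trust` the report is a noisy
    reading of the true ratio; otherwise it tells us nothing (flat)."""
    gauss = np.exp(-0.5 * ((report["measured"] - ratios) / report["noise"])**2)
    gauss /= gauss.sum()
    flat = np.ones_like(ratios) / len(ratios)
    return report["trust"] * gauss + (1 - report["trust"]) * flat

# Posterior is proportional to the prior times each scout's trust-weighted
# likelihood -- one reading of the "product of positional priors" idea.
posterior = prior.copy()
for r in reports:
    posterior *= likelihood(r, ratios)
posterior /= posterior.sum()

print("Posterior mean ratio:", np.sum(ratios * posterior))
```

The sketch only shows the mechanics: if the trust weights are themselves shaped by intersubjective signals from the rest of the team, the decomposition into positional, subjective, and intersubjective components is no longer clean, which is the commenter's point.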

Keep up the good work!

O.H. Murphy

I feel like this is relevant: https://open.substack.com/pub/mindthefuture/p/why-im-not-a-bayesian?r=bhqc9&utm_medium=ios. I’ve personally been thinking about writing a post about how Bayesianism relates to traditional ‘causality’; I think very similar points can be made there about making ideas obsolete.
