Bayesian decision theory works as follows. Let's say we have a choice between action A and action B. We sit down to weigh up our options. Action A could lead to outcome 1 or 2; action B could lead to outcome 3, 4, or 5. Assume the probabilities are:
i) If we take action A, there is a 40% chance of 1 and a 60% chance of 2.
ii) If we take action B, there is a 30% chance of 3, a 50% chance of 4, and a 20% chance of 5.
Further, suppose the outcomes have the following payoffs:
1=50, 2=80, 3=30, 4=100 and 5=0
Then, for each action, we take the probability of each outcome, multiply it by the value of that outcome, and add the results:
Action A: 0.4*50 + 0.6*80 = 68
Action B: 0.3*30 + 0.5*100 + 0.2*0 = 59
Since 68>59, the expected utility of action A is higher than the expected utility of action B. And so, on a Bayesian approach to decision theory, you should do A.
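To make the bookkeeping concrete, here is a minimal Python sketch of the same calculation; the dictionary layout and the expected_utility helper are just one way of organizing the numbers from the example above.

```python
# Each action maps to a list of (probability, payoff) pairs for its possible outcomes.
actions = {
    "A": [(0.4, 50), (0.6, 80)],             # outcomes 1 and 2
    "B": [(0.3, 30), (0.5, 100), (0.2, 0)],  # outcomes 3, 4 and 5
}

def expected_utility(outcomes):
    """Probability-weighted sum of payoffs."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))
# A 68.0
# B 59.0  -> the expected utility of A is higher, so take action A
```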
Bayesian epistemology, and by extension Bayesian decision theory, has a problem: it struggles to deal with the possibility that there is something we haven't thought of (1), because it assumes the agent is aware of all the possibilities. This alone creates grave problems for decision theory. But it gets worse.
I want to suggest that decision theory has a bias toward optimism, and toward action rather than inaction, when it is used in complex endeavors. The reason is simple: in complex, difficult enterprises, "something we haven't thought of" is usually bad. This means any approach to decision theory that depends on assigning probabilities to the outcomes we have considered, and deciding based on expected payoff, has an inbuilt tendency to be optimistic.
Let's suppose I was involved in an extremely complex transportation engineering project. I'm relating my experiences with the project to you, and halfway through I say, "And then something with major consequences that no one had anticipated happened." In the split second after I say that, but before I clarify what the unexpected event was, are you expecting it to be good or bad?
I suspect this pattern is one reason (though far from the only one) why significant deviations from initial cost estimates in large projects are much more likely to be upwards than downwards.
No doubt there's some bias in what gets reported and talked about, and that plays its role, but when it comes to large, complex projects, most very important events you hadn't even considered as possibilities are bad. Because decision theory typically involves assigning probabilities to known possibilities (the "known unknowns" of Rumsfeld's now clichéd phrase), it neglects unknown unknowns. This would be bad enough if unknown unknowns were random, but they're not: they systematically tend toward the bad.
One way to deal with the problem is to include a possibility representing "something bad we haven't thought of happens" and a possibility representing "something good we haven't thought of happens", with the probabilities and payoffs filled in, perhaps, based on a rough estimate of how common totally left-field disasters and windfalls are in similar ventures. Whether this is fully satisfactory I'm not really sure. Another option is to leave it out of the model altogether, but build a culture of skepticism around the results of models, on the understanding that they skew too optimistic.
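As a rough illustration of that first option, here is what adding catch-all outcomes could look like, reusing the toy numbers from earlier. The 5% and 2% base rates and the -100 and +20 payoffs are made-up placeholders, not estimates I'm defending.

```python
# Same toy actions as before, now extended with two catch-all outcomes each.
# The catch-all probabilities and payoffs below are purely illustrative guesses.
actions = {
    "A": [(0.4, 50), (0.6, 80)],
    "B": [(0.3, 30), (0.5, 100), (0.2, 0)],
}
P_BAD, PAYOFF_BAD = 0.05, -100   # rough base rate / cost of a left-field disaster
P_GOOD, PAYOFF_GOOD = 0.02, 20   # rough base rate / size of a left-field windfall

def expected_utility(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

def with_catch_alls(outcomes):
    """Rescale the known outcomes and append the two catch-all possibilities."""
    remaining = 1 - P_BAD - P_GOOD
    return ([(p * remaining, payoff) for p, payoff in outcomes]
            + [(P_BAD, PAYOFF_BAD), (P_GOOD, PAYOFF_GOOD)])

for name, outcomes in actions.items():
    print(name, round(expected_utility(with_catch_alls(outcomes)), 2))
# A 58.64
# B 50.27
```

The ranking doesn't change here, since both actions get the same catch-alls, but every estimate drops, which is the point: the model no longer pretends that the only things that can happen are the things we listed.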
There's a long tradition that holds that a certain kind of reason is too optimistic, and there is, I think, a cultural history of "giving fortune its portion": a recognition of, and humility in the face of, absolute disaster. Incalculable destruction beyond all anticipation is the theme of some of our oldest literary works, and wisdom is often equated with not being overconfident with respect to fortune. There is also a history of formal models that did not "give fortune its portion" leading to disaster, from the collapse of Long-Term Capital Management (perhaps due in part to over-reliance on Black-Scholes) to the subprime crisis, both of which have been attributed to an under-sensitivity to big "out of model" risks.
Footnotes:
(1): Examples of the "something we haven't thought of" problem include, but are not limited to: 1. the problem of new theories, 2. the problem of old evidence, 3. the problem of logical omniscience. What all these problems seem to me to have in common is that Bayesian epistemology and decision theory are founded on the assumption that the agent is aware of the full space of possibilities.
Good observation. Taleb talked about black swans, but he didn't emphasize that they're mostly bad guys.
I think it's important to note that this is not actually a criticism of Bayesian decision theory (or evidential decision theory, as Wikipedia calls it), but rather a criticism of *improperly performed* Bayesian decision theory. When the agent correctly accounts for all possibilities they're aware of and assigns them the appropriate epistemic probabilities, this problem doesn't exist.