Eliezer Yudkowsky has written a good review of individual cognitive biases for a volume assessing the possibility of global catastrophic risks from AI, nanotech, biotech, and the like. Yudkowsky's particular concern is AI, and the likelihood that an AI will take over the world soon after its ascendance if it isn't carefully designed to care about humanity's wishes. In the course of arguing about this subject, he has spent quite a bit of time becoming an expert on various aspects of epistemology: Bayesian reasoning, cognitive biases, and in general how to think, act, and argue rationally. His writings in this area are particularly clear and are usually directed at an audience that may not have thought carefully about reasoning.

This particular article focuses on the biases that people in general, and experts in particular, display on reasoning tasks. Yudkowsky shows that there is a large literature demonstrating that people are remarkably bad at putting error bounds on their estimates: when asked for ranges they are 98% certain contain the right answer, experts and non-experts alike give ranges that contain it only about 60% of the time. It doesn't matter whether you convince them that the results matter or pay them for improved performance. If you show experimental subjects how poorly they and others perform, their 98%-confidence estimates improve to about 80% correct.
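To make that calibration claim concrete, here is a minimal sketch in Python of how such a hit rate is computed; the responses are invented for illustration and none of the numbers come from the studies Yudkowsky cites.

```python
# Illustrative only: hypothetical 98%-confidence intervals and true answers.

def calibration_rate(responses):
    """Fraction of (low, high, truth) intervals that contain the true value."""
    hits = sum(1 for low, high, truth in responses if low <= truth <= high)
    return hits / len(responses)

responses = [
    (100, 200, 150),    # interval contains the truth
    (10, 20, 35),       # overconfident: truth falls outside
    (1000, 1500, 900),  # overconfident again
    (5, 9, 7),
    (40, 60, 80),
]

print(f"stated confidence: 98%, observed hit rate: {calibration_rate(responses):.0%}")
# -> observed hit rate: 60%
```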

Yudkowsky also discusses predictions about unusual events with large effects. A number of biases lead people to underpredict these surprising outcomes, such as focusing on familiar cases. For instance, people strongly overestimate the frequency of events commonly reported in the news, and they tend to neglect the possibility of things that haven't happened recently. This partly explains our misunderstanding of the risks connected with dam construction. Building a dam makes floods rarer, which encourages people to build in the floodplain; when a flood does occur, the damage is so much greater that the average annual damage actually rises.
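As a toy illustration of that effect, here is a small calculation with invented numbers (not taken from Yudkowsky's chapter): the dam cuts the chance of a flood in any given year, but heavier development behind it raises the cost when a flood finally happens, so the expected annual damage goes up.

```python
# Made-up numbers to illustrate how rarer floods can still mean higher
# expected annual damage once each flood destroys far more.

# Before the dam: frequent but comparatively small floods.
p_flood_before, damage_before = 0.10, 1_000_000   # 10% chance per year, $1M per flood

# After the dam (hypothetical): rare floods, but more is built behind it.
p_flood_after, damage_after = 0.01, 20_000_000    # 1% chance per year, $20M per flood

print("expected annual damage before the dam:", p_flood_before * damage_before)  # 100000.0
print("expected annual damage after the dam: ", p_flood_after * damage_after)    # 200000.0
```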

In the face of all these biases, why do prediction markets work so well? There is growing evidence that they do: if you look at collections of events the market priced at 80% (or 30%), you'll usually find that close to 80% (or 30%) of them occurred. All (or very nearly all) of the participants are subject to most of the biases Yudkowsky discusses, so why does aggregating their trades produce better information? I think part of the reason is that the questions these markets predict well are ones where information is revealed gradually and in a dispersed way, whereas the kinds of events Yudkowsky worries about happen suddenly and without early warning.
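Here is a minimal sketch of the calibration check described above, using invented outcomes rather than real market data: group resolved events by the probability the market assigned them, then compare that probability with the fraction of those events that actually occurred.

```python
from collections import defaultdict

# Hypothetical (market_probability, event_occurred) pairs for resolved questions.
forecasts = [
    (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
    (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
]

buckets = defaultdict(list)
for prob, occurred in forecasts:
    buckets[prob].append(occurred)

# A well-calibrated market shows observed frequencies close to the stated probabilities.
for prob, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"market said {prob:.0%} -> {observed:.0%} of {len(outcomes)} events occurred")
```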

I think that also helps us select the kinds of questions prediction markets should be applied to. If I'm right, they won't help much in predicting individual acts of terror, natural disasters, or individual deaths (except from progressive disease). But they should help with technology developments, project success, sales figures, or anything else where many people pick up subtle clues about the outcome as the situation develops over time. In all these cases, the market can pull together the evidence people are observing and make it visible in one central place.

Maybe we should admit that these markets aggregate observations rather than predictions, but when the clues that accumulate over time are good (even if imperfect) indicators, the result is a set of predictions useful to anyone without direct access to the dispersed evidence.