John Cassidy of the New Yorker has a nice explanation of Nate Silver’s statistical approach to forecasting elections and a balanced view of how much weight to give it. He contrasts that approach with that of David Brooks, New York Times columnist, who describes himself as a pollaholic, but who’s ultimately a skeptic.
So here are our information and forecasting choices, on election forecasting and on an endless variety of other concerns:
- Go on judgement, pure gut instinct, anecdotes, and what we personally see and hear.
- Rely solely on hard data, collected systematically, with uncertainty systematically taken into account.
- Use a combination of both, starting with anecdotes, judgement, and gut instinct.
- Use a combination of both, starting with the data and balancing that out with judgement, including knowledge of the data and how they’re collected and analyzed.
Option 1 is throwing away lots of information and deliberately, if not consciously, relying on one’s own biases. We’re all familiar with the expression, “I’ll believe it when I see it.” But far too often people invert that logic and “see it when they believe it” – and don’t see it unless they already believe it. They not only risk being blindsided by reality, they risk being further confused about what blindsides them.
Option 2 is also throwing away information, including what may be the most up-to-date and what may lie outside of that which is systematically gathered and processed. Henry Mintzberg critiqued Robert McNamara’s management of the United States’ conduct of the War in Viet Nam for exactly this type of failure. I’m paraphrasing here, but Mintzberg asks: who knew better how the war was going, McNamara with his reams of data, or the GI on the ground looking into the eyes of villagers? (See also Mintzberg’s Harvard Business Review paper Managing Government, Governing Management.) We’ve talked before and will talk more about the power of more modern information technologies that let organizations share centralized hard data alongside decentralized soft data, supporting better decision making.
Note here that Silver is not making managerial or political decisions. Others must make those decisions, whatever the domain, political or otherwise.
Option 3 risks reinforcing one’s own biases with seemingly hard data and, in doing so, it risks misleading others and ourselves.
Option 4, not surprisingly, is my preference, not only in forecasting election results, but generally.
None of these choices will make you right all the time. Regardless of the electoral outcomes, there will be great crowing about one approach or the other having been right and that being “proof” of the rightness of that approach. That will be both wrong and dangerously misleading.
I think Silver is quite conscious of the weaknesses of his methods and the tradeoffs he makes in his modeling, and that he tries to be explicit about them. And it appears that he’s constantly refining, learning, and improving. Most importantly, he doesn’t make absolute statements. He expresses his forecasts in terms of probabilities. That’s really all we can ask.
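The Option 4 idea of starting from data and tempering it with judgement, and Silver’s habit of stating forecasts as probabilities rather than flat predictions, can be illustrated with a toy Bayesian sketch. To be clear, this is a hypothetical illustration of the general principle, not Silver’s actual model: judgement supplies a prior on a candidate’s support, poll data updates it, and the output is a probability of winning.

```python
import random

def win_probability(prior_a, prior_b, poll_yes, poll_n, draws=100_000, seed=42):
    """Estimate P(true support > 50%) by combining a judgement-based
    Beta(prior_a, prior_b) prior with poll counts via conjugate updating,
    then sampling from the Beta posterior (toy illustration only)."""
    # Conjugate Beta-Binomial update: data shifts the prior.
    a = prior_a + poll_yes
    b = prior_b + (poll_n - poll_yes)
    rng = random.Random(seed)
    # Monte Carlo estimate: fraction of posterior draws above 50% support.
    return sum(rng.betavariate(a, b) > 0.5 for _ in range(draws)) / draws

# Judgement says the race is roughly even (Beta(10, 10) prior);
# a hypothetical poll finds 540 of 1000 respondents favoring the candidate.
p = win_probability(10, 10, 540, 1000)
print(f"Probability of winning: {p:.0%}")
```

The point of the sketch is the shape of the answer: not “the candidate will win,” but a probability that reflects both the data and its uncertainty, with the prior encoding the forecaster’s judgement about what the data alone may miss.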
Anyway, take a look at Cassidy’s article. It doesn’t have any equations.