Brad DeLong argues that Silver has things mostly right, and that Wang et al. are underestimating the chances of an upset:
Silver's individual state-level effects appear to be much more correlated than Wang's -- Silver believes that if something is leading pollsters to overstate Obama's true strength in Florida, his true strength in Ohio is probably overstated as well, while Wang appears to believe that state-level variation is much closer to being independent. Similarly, if there were to be a sudden 1.5% shift in public opinion, that shift would be visible in all swing states roughly equally. So the Silver model has built in a 20-25% chance that the situation will change, or that the polling is a little off, or that something else will shift the electorate (I, for one, greatly fear the return of the Bradley Effect now that white people have already voted for a black President once).
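The correlation point can be made concrete with a small Monte Carlo sketch. All the numbers below (state margins, error size, correlation level) are illustrative assumptions, not Silver's or Wang's actual inputs; the point is only that when a candidate leads in most swing states, correlated polling errors produce more upsets than independent ones, because the whole map can shift at once.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000
# Assumed Obama polling margins (percentage points) in three swing states
margins = np.array([2.0, 3.0, 1.0])  # illustrative, not real poll numbers
sigma = 2.5                          # assumed per-state polling error (std dev)

def win_prob(rho):
    """P(leader carries a majority of states) with inter-state correlation rho."""
    shared = rng.normal(0, sigma, size=(n_sims, 1))            # common error
    local = rng.normal(0, sigma, size=(n_sims, len(margins)))  # state-specific error
    # Mix so each state's error keeps variance sigma^2, with correlation rho
    errors = np.sqrt(rho) * shared + np.sqrt(1 - rho) * local
    wins = (margins + errors) > 0
    return (wins.sum(axis=1) >= 2).mean()

print(win_prob(0.0))  # independent errors: state-level misses mostly cancel
print(win_prob(0.8))  # correlated errors: a single shared miss flips everything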
I haven't looked closely enough at Silver's and Wang's computations to figure out how they're producing such different results. But there may be another possibility -- Silver's model may think that Barack Obama's worst enemy is simply time. If we find that between now and Tuesday morning Silver's model rapidly converges on Wang's, that would suggest that the 538 model gives much higher probability to the chance that public opinion will shift. It's possible that this convergence is already happening: in Silver's model, Obama has climbed from a 68% win probability a week ago to 77% today. However, the President's poll numbers have improved modestly in that time span as well, and it's basically impossible to separate those two factors without direct access to Silver's simulations. Still, keep an eye on how close the odds are on Election Day.
Nate has a now-cast and a Nov6-cast, so maybe that offers a way to see whether the time hypothesis is the explanation.
Good point! The gap between the two is very small -- 1.9%. So Nate thinks there is a ~1.9% chance that public opinion will shift enough to change the election, and a 20% chance that the polling is wrong.
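The decomposition implied by those numbers can be written out directly. The now-cast value below is inferred from the quoted 77% Nov6-cast and the 1.9-point gap; treat it as approximate.

```python
# Approximate decomposition of Silver's uncertainty, using the numbers
# quoted above. now_cast is inferred (77% + 1.9 points), not quoted directly.
nov6_cast = 0.770                         # forecast win probability for Nov 6
now_cast = 0.789                          # "if the election were held today"
p_opinion_shift = now_cast - nov6_cast    # ~1.9 points: opinion moves by Tuesday
p_polls_wrong = 1 - now_cast              # ~21 points: today's polls are off
```

The second term lands right around the ~20% figure quoted above, which is what makes it look like nearly all of Silver's residual uncertainty is about polling error rather than opinion change.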
The second number seems way high.
My understanding is: state-level polls indicate an easy win for Obama, national-level polls indicate a tossup, and variance between polls leads to increased estimates of uncertainty. So the critical question is how much you allow national-level polls to alter your estimate of uncertainty. I remember being pretty comfortable with Nate's approach when I read the description months ago, but I have forgotten the details, and I don't know what his competitors do.
Actually, it looks like Sam Wang doesn't use the national polls at all. That's certainly wrong in theory (good Bayesians should try to integrate all the relevant information, not just the information that's easiest to integrate!) but may work well in practice (if it's too hard to figure out how to use national polls, a straight average of state polls may perform better). Nate also uses economic indicators, but the weight on those gets pretty close to zero as the election approaches, so they don't matter much at this point (and again, I like Nate's approach a bit better -- it would make his website look much smarter in an election like 1988). At any rate, I think there is a non-zero probability that the Gallup poll is right and Romney is winning pretty substantially -- but it's pretty close to zero. So the question is whether your prediction will perform better if you fix that probability at zero or try to estimate it.
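One textbook way to "integrate all the relevant information" is precision-weighted pooling of independent Gaussian estimates. This is a generic sketch of that idea, not what Silver's model actually does; the margins and standard errors are made up for illustration.

```python
# Generic precision-weighted pooling of two independent Gaussian estimates
# of the same quantity (here, a candidate's national margin). Numbers are
# illustrative; this is not Silver's actual method.
def pool(mean_a, se_a, mean_b, se_b):
    """Combine two estimates, weighting each by its inverse variance."""
    w_a, w_b = 1 / se_a**2, 1 / se_b**2
    mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    se = (w_a + w_b) ** -0.5
    return mean, se

# Suppose state polls imply Obama +2.0 (se 1.0) and national polls a
# tossup, 0.0 (se 1.5): the pooled estimate sits between them.
mean, se = pool(2.0, 1.0, 0.0, 1.5)
```

The pooled estimate leans toward the tighter (state-poll) number, and its standard error is smaller than either input's -- which is the theoretical argument for using both sources instead of throwing the national polls away.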