
The Prediction – How Nate Silver Does It

An overview of how Nate Silver built his probabilistic model, what his results mean, and how to use such models to predict elections.

With the announcement of longtime Utah senator Orrin Hatch’s retirement, Mitt Romney appears poised to make a return to politics. That presents an opportunity to review the predictions made during Romney’s last run, and more generally how Nate Silver creates his prediction models.

Nate Silver beat them all. Joe Scarborough, the conservative host of “Morning Joe” on MSNBC, attacked Silver during the election. He later apologized, sort of, acknowledging that Silver did get it right. Politico.com called him a “one-term” celebrity, saying, “For all the confidence Silver puts in his predictions, he often gives the impression of hedging.” (Later, Silver replied that Politico covers politics like sports, “but not in an intelligent way at all.”)

Nate Silver, for those who don’t know, writes the FiveThirtyEight blog at The New York Times and is the best-selling author of The Signal and the Noise. In the book, Silver examines the world of prediction, investigating how we can distinguish a true signal from a universe of noisy data. It is about prediction, probability, and why most predictions fail, but not all.

What had folks attacking Silver was this: He predicted elections far more accurately than most pollsters and the noisy pundits on Politico, The Drudge Report, MSNBC and others. In his book, Silver described his model as “bringing Moneyball to politics.” That is, producing statistically driven results.

Silver popularized the use of probabilistic models in predicting elections. Plainly stated, Silver produces the probability of a range of outcomes, rather than just picking a winner. When a candidate reaches, say, a 90% chance of winning, Silver will call the race. What made Silver famous was his extremely accurate prediction of vote percentages. Pundits are almost always far off. However loath pollsters are to admit it, individual polls are almost always wrong, too. The average of polls, however, is almost always more accurate than any single poll. And a systematic probability model built on the average of polls is almost always right. Think of it as political crowdsourcing.

Silver has built one of the best models out there. It’s accurate, consistent, and totally statistical. One advantage of being totally statistical is that his model can be replicated. This piece will review the process, explaining how Silver built his model, what his results mean and how to use them going forward.

The Basics

To run Silver’s model, here is what you will need:

  • Microsoft Excel
  • A source of campaign finance and election data
  • Historical data to set “polling weights”

The first step is to calculate the “poll weight,” a measure of how much an individual poll counts when averaged with other polls. The poll weight consists of three values (a code sketch follows the list):

  • Recency: The older a poll, the lower its accuracy. A recency factor is calculated using a relatively simple decay function. Think of a poll as having a shelf life, as does a pharmaceutical product or packaged good. The longer it sits on the shelf, the less potent the poll.
  • Sample Size: When shown on television, a poll might have a margin of error of +/- 4%. That spread is calculated from the sample size. As a general rule, the larger the sample size, the more accurate the poll.
  • Pollster Rating: Silver alludes to how his pollster ratings work in a 2010 blog post. He does not, however, completely reveal his secret sauce. Without going into too much statistical detail, Silver uses historical data and regression analysis to create an accuracy measure for pollsters. Better pollsters have positive ratings; worse have negative ratings.
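Since Silver does not publish the exact formulas, here is a minimal sketch of the three components in Python. The exponential decay for recency, the square-root rule for sample size, and the exponential mapping of pollster ratings are all illustrative assumptions, not Silver’s published method:

```python
import math

def recency_weight(poll_age_days: float, half_life_days: float = 30.0) -> float:
    """Exponential decay: a poll loses half its weight every half_life_days.
    The 30-day half-life is an assumed parameter, not Silver's value."""
    return 0.5 ** (poll_age_days / half_life_days)

def sample_size_weight(n: int, baseline_n: int = 600) -> float:
    """Sampling error shrinks with the square root of the sample size,
    so weight each poll relative to an assumed baseline sample."""
    return math.sqrt(n / baseline_n)

def pollster_weight(rating: float) -> float:
    """Map a pollster accuracy rating (positive = better than average,
    negative = worse) onto a multiplicative weight."""
    return math.exp(rating)
```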

After these weights are calculated, the next step is to create a weighted polling average. That is, take the weighted mean of the polls within the state using the three weights described above. For smaller races, like congressional or state races, polling data might be scarce, particularly in uncontested races. Presidential contests, as we know, offer a deluge of data to be plugged in. Silver does not say exactly how he combines the weights. I multiply them and then weight the polls, as sketched below.
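Following that multiply-and-weight approach, a weighted polling average for one state might be computed like this. The polls and their weights are hypothetical, and each weight is assumed to be the product of the three components sketched above:

```python
from dataclasses import dataclass

@dataclass
class Poll:
    result: float  # candidate's share in this poll, e.g. 48.0
    weight: float  # recency * sample-size * pollster-rating weight

def weighted_polling_average(polls: list[Poll]) -> float:
    """Weighted mean: sum(w_i * x_i) / sum(w_i)."""
    total_weight = sum(p.weight for p in polls)
    return sum(p.weight * p.result for p in polls) / total_weight

# Three hypothetical polls for one state
polls = [Poll(48.0, 0.9), Poll(46.5, 0.4), Poll(50.0, 0.2)]
print(weighted_polling_average(polls))  # ~47.9
```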

Error

A weighted polling average, like all averages, has two parts: a weighted mean and an error. The weighted mean is the point estimate, the one number that pops out of the calculation. The error is the average distance of each data point from the weighted mean. In creating a polling prediction, we utilize the error around the weighted mean. The smaller that average distance, the more accurate the polling average.
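As a concrete, simplified stand-in (Silver does not publish his exact error model), one can use the weighted standard deviation of the poll results around the weighted mean:

```python
import math

def weighted_error(results: list[float], weights: list[float]) -> float:
    """Weighted standard deviation of poll results around the weighted mean."""
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, results)) / total
    variance = sum(w * (x - mean) ** 2 for w, x in zip(weights, results)) / total
    return math.sqrt(variance)

print(weighted_error([48.0, 46.5, 50.0], [0.9, 0.4, 0.2]))  # ~1.1
```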

When examining what Silver considers important in interpreting error (below), we get a good snapshot of what makes a poll accurate, and what makes a poll less accurate:

  • Error is higher in races with fewer polls.
  • Error is higher in races where the polls disagree with each other.
  • Error is higher in races with a large number of undecided voters.
  • Error is higher when the margin between the two candidates is lopsided.
  • Error is higher the more days prior to Election Day the poll is conducted.

The Presidential Simulation

Silver predicts a lot of races: U.S. House, U.S. Senate and state governorships. The mother of all elections is, of course, the presidential. Since the 2016 election was an outlier in terms of prediction, let’s revisit the 2012 Presidential election for a better case study of Silver’s methods.

If I were going to construct Silver’s model for the Presidential election, I would set up 51 worksheets in Excel, one for each state plus the District of Columbia. Each worksheet would contain the polling data and weights for that state. We configure the 51 worksheets so each poll has its result, its weight, and its error. For one run of a simulation, each poll would take one value, producing one weighted average for the state. The winner would then be declared, and Excel would assign the electoral votes for that state. The front worksheet of my Nate Silver model would show all 51 contests, tally who gets at least 270 electoral votes, and predict the winner.

However, if you run the simulation, say, 10 million times, each poll has results that bounce around within its error, spitting out 10 million possible outcomes. When arrayed in a cumulative chart, all possible results are shown.
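A stripped-down Monte Carlo version of that loop, in Python rather than Excel, might look like the following. The three states, their margins, errors, and electoral votes are placeholder inputs, and drawing each state’s margin from a normal distribution is a simplifying assumption:

```python
import random

# Hypothetical per-state inputs: (weighted-average margin for candidate A,
# error around that margin, electoral votes). A full model has 51 entries.
states = {
    "Ohio":     (0.8, 3.0, 18),
    "Florida":  (-0.2, 3.5, 29),
    "Virginia": (1.5, 3.2, 13),
}

def simulate_once() -> int:
    """One simulated election: draw each state's margin from its polling
    distribution and return candidate A's electoral-vote total."""
    return sum(ev for margin, error, ev in states.values()
               if random.gauss(margin, error) > 0)

def win_probability(n_runs: int = 100_000, needed: int = 31) -> float:
    """Fraction of simulated elections in which candidate A reaches the
    winning threshold (31 = majority of this toy map; 270 in a full model)."""
    wins = sum(simulate_once() >= needed for _ in range(n_runs))
    return wins / n_runs

print(win_probability())
```

Collecting the electoral-vote totals from every run, rather than just the win count, yields that cumulative chart of all possible results.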

Understanding What Silver Says

One week before the 2012 Presidential election, Silver reported that President Obama had a 73 percent chance of being reelected. Of course, the prediction caused howls from Fox News, particularly from its loud, partisan, and woefully inaccurate bevy of talking heads. But while they bayed in protest, none explained exactly what Silver meant.

Silver ran his model eight days before the election. As I stated earlier, polls become more accurate closer to Election Day. Let’s say that Silver ran his model 10 million times (with a new laptop this would take, oh, about four minutes). With states such as New York, California, Texas, or Georgia, the outcome was never in doubt. But in swing states such as Virginia, Florida, and particularly Ohio, the polls were too close to call. The winner may change from one iteration to the next. If one runs enough iterations to cover the possible combinations (and I would say that 10 million would probably do it), then one can say how many times each side triumphs.

When Silver ran his models with the latest polls, President Obama came out with more than 270 electoral votes 7.3 million times; Mitt Romney won 2.7 million times. Thus, pronounced Silver, President Obama had a 73 percent chance of winning because he won 73 percent of the 10 million simulations. (On the last day before the 2016 election, Silver’s model showed Hillary Clinton winning 71% of simulations. When asked how he went wrong, Silver replied that he wasn’t wrong, that he had given Donald Trump a 29% chance of winning, which is not all that long a shot when viewed objectively. This led, again, to charges that Silver hedges his predictions.)

Predicting the actual vote percentages is a little more difficult. However, when one has as much data as Silver, and the ability to run the simulations millions of times, the predicted vote count will converge toward the real number, much as crowdsourced guesses often converge on the true result.

Practical uses of Silver’s model are abundant, and not solely on a presidential level. For example, if someone is working for a campaign in which the candidate is leading in the polls by 48 percent to 46 percent (a margin that is actually a statistical tie) two months before Election Day, how likely is that candidate to actually win? And if he or she is behind by five points with one month to go, how much ground does the campaign really need to make up?

A prediction model can answer these questions. For example, if one candidate is leading by five points one month prior to Election Day in that or similar districts, 80 percent of the time he or she wins. This can be arrived at by looking at historical data, or by plugging in all the current polls and financial data and running the simulation 10 million times.
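As a rough analytic shortcut, one can treat the final margin as normally distributed around the current lead, with a standard error taken from historical races at the same horizon, and compute the chance the margin stays positive. The 6-point error used here is an illustrative assumption that happens to reproduce the 80 percent figure above:

```python
import math

def win_probability(lead: float, error: float) -> float:
    """P(final margin > 0) when the margin is Normal(lead, error):
    the standard normal CDF evaluated at lead / error."""
    return 0.5 * (1.0 + math.erf(lead / (error * math.sqrt(2.0))))

# A 5-point lead with an assumed 6-point historical error one month out:
print(round(win_probability(5.0, 6.0), 2))  # ~0.8
```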

Why Models Like Silver’s Are Always More Accurate Than Pundits

Political pundits like Dick Morris, Michelle Malkin, and Matt Drudge are paid to fill air time and give their opinions. Their opinions and predictions are almost always wrong. By contrast, Silver scientifically boils down real data and makes accurate predictions. Nate Silver brought probabilistic models into mainstream political coverage, and they are here to stay. It’s called math.


One response to “The Prediction – How Nate Silver Does It”

  1. This is such garbage and poppycock that it almost does not deserve a response. Weighting based on the credibility of individual pollsters, averaging the results of political polls that are released publicly, and then adding the so-called “secret sauce” violates so many basic rules of polling, and the public reporting of the data results, that it is laughable.

    Also, people like George Gallup, Bud Roper, Lou Harris and Dan Yankelovich were using probability processes in election and non-election polls going back to the 1950s.

    All I know is that Nate was predicting Hillary would beat Trump until about 11 PM on election night when suddenly he was saying there was a larger percentage chance of Trump winning.

    Let me be clear. Nate does do some interesting mathematical things and is willing to take a lot of risks. However, like most pollsters dealing in the political realm he can also get it wrong at times. (Perhaps less often than a lot of other well-known pollsters). It is the nature of the beast that elections are impossible to call when they are decided within the polling margin of error of plus or minus three percentage points.
