The 2016 NFL season is over. How did I do in my predictions? Here are the links to all predictions made:
2016 NFL Predictions:
- Super Bowl: 1 - 0
- Championship Round: 2 - 0
- Divisional Round: 1 - 3
- Wildcard Week: 2 - 2
- Week 17: 10 - 6
- Week 16: 10 - 6
- Week 15: 13 - 3
- Week 14: 11 - 5
- Week 13: 8 - 7
- Week 12: 11 - 5
- Week 11: 9 - 5
- Week 10: 9 - 5
- Week 9: 8 - 5
- Week 8: 6.5 - 6.5
- Week 7: 8.5 - 6.5
- Week 6: 8 - 7
- Week 5: 8 - 6
- Week 4: 7 - 8
- Week 3: 9 - 7
- Week 2: 11 - 5
Season Recap
Here are some summary statistics:
- Correct Prediction %:
- Whole season (regular and playoffs): 61%
- Regular season: 61%
- Playoffs: 55%
- Statistical significance (two-tailed binomial test, 95% CI; a sketch of the computation follows this list):
- Whole season:
- p-value: 0.0006
- CI: (0.55,0.67)
- Regular season:
- p-value: 0.0006
- CI: (0.55,0.67)
- Playoffs:
- p-value: 1
- CI: (0.23,0.83)
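For anyone who wants to reproduce these numbers, here is a minimal sketch using scipy's `binomtest` (its confidence intervals are Clopper-Pearson by default). The win/game tallies are my assumptions, derived by summing the weekly records above, with ties counted as half a win:

```python
from scipy.stats import binomtest

# Assumed tallies, summed from the weekly records above
# (ties count as half a win; the halves add up to whole numbers).
records = {
    "Whole season": (153, 251),    # (wins, games predicted)
    "Regular season": (147, 240),
    "Playoffs": (6, 11),
}

for label, (wins, games) in records.items():
    # Two-tailed test against the null hypothesis of 50% accuracy,
    # i.e. predictions no better than a coin flip.
    result = binomtest(wins, n=games, p=0.5, alternative="two-sided")
    ci = result.proportion_ci(confidence_level=0.95)  # Clopper-Pearson
    print(f"{label}: {wins / games:.0%} correct, "
          f"p-value = {result.pvalue:.4f}, "
          f"95% CI = ({ci.low:.2f}, {ci.high:.2f})")
```

If the assumed tallies are right, this should reproduce the percentages, p-values, and intervals above up to rounding.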
My predictions were accurate about 61% of the time, and that result was statistically significant. I didn't do as well in the playoffs, but I'm not surprised: playoff teams are better and more evenly matched, and with fewer games to predict, each one counts for more in the percentages.
Comparison to FiveThirtyEight
In week 7 I compared my predictions to those being made by others. Because 538.com took a similar approach to automating predictions and was doing the best at the time of all the sites I could find, I took it to be a good benchmark to measure against.
I should note that since then, the best overall predictions I have found have come from Elliot Harrison at NFL.com. Assuming that he predicted the Patriots to win the Super Bowl, he went 179-86-2 over the course of the whole season. That is 68% correct predictions!
His was not an automated approach, so a direct comparison between what I or 538.com have done and what Elliot Harrison has done is not entirely fair. To compare apples to apples, I would need to compare my own personal predictions (which may override my model's) against his. I did not keep track of those this season, but it is on my list for next season.
Consequently, I will focus on 538. How did it do?
- Whole season: 64%
- Regular season: 64%
- Playoffs: 72%
538 did do better. That's ok. With more time and investment in this, I'd hope that I could match or exceed its performance, but given other commitments, that hasn't been possible. Maybe next season...
Season Visualized
Here is a chart that shows my progress over the course of the season. You can see that I started well but had some low points in weeks 4 - 8. After that, apart from week 13, the model did pretty well and the season win ratio increased. The playoffs were a bit erratic, mainly due to a poor Divisional Round, but this didn't really affect the overall trend.
Here are comparisons of the weekly win ratios and season win ratios between 538 and me. On a weekly basis, I beat 538 6 times, I was beaten 8 times, and we tied 6 times. So there was good back and forth from week to week.
However, 538 beat me by more and lost by less, so 538 had a higher season win ratio pretty much all the way through.
Going Forward
Just like last year, there is always next year. I still have a long list of improvements to make, especially around code management and automation, and the data management pipeline. We'll see what happens :)
Thanks for joining me this season. See you next year!