March Madness 2019: Grading Our Model’s First Round Predictions

Photo: Chris Carlson / Associated Press

Last week I posted a model predicting the outcomes of all 67 games of the 2019 NCAA Division I Men’s Basketball Tournament. With the first round in the books, now is a good time to see how our algorithm fared against the best-known predictive models out there.

The two most popular models used during March Madness were created by FiveThirtyEight and SportsLine. While FiveThirtyEight’s content is freely available to the public, SportsLine charges $9.99/month for its statistical models, so one should expect especially high-quality analysis there. I checked how accurate these models were through the first thirty-two games of the tournament, and I included two common baseline methods as well, to see whether the models add any value over simpler approaches.

| Method | Round of 64 Accuracy |
| --- | --- |
| The Spax's Model | 22/32 (68.8%) |
| FiveThirtyEight's Model | 21/32 (65.6%) |
| SportsLine's Model | 21/32 (65.6%) |
| Picking the Vegas Favorite | 21/32 (65.6%) |
| Picking the Higher Seed | 20/32 (62.5%) |
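
For anyone who wants to double-check the percentages, here is a quick Python sketch that recomputes them from the correct-pick counts in the table above (the counts themselves are the only inputs):

```python
# Recompute each method's Round of 64 accuracy from its correct-pick count.
correct_picks = {
    "The Spax's Model": 22,
    "FiveThirtyEight's Model": 21,
    "SportsLine's Model": 21,
    "Picking the Vegas Favorite": 21,
    "Picking the Higher Seed": 20,
}
GAMES = 32  # number of Round of 64 games

for method, hits in correct_picks.items():
    print(f"{method}: {hits}/{GAMES} ({hits / GAMES:.1%})")
```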

While our model was the most accurate, the margin between it and the other methods isn’t significant. So, what’s the point of using it? Well, the tight gap is partly due to the fact that the model rarely favors a double-digit seed outright; it just gives some of them a relatively high chance of pulling off an upset. If a person used The Spax’s model but picked the lower-seeded team whenever it was given a win probability of at least 45.0%, they would have correctly predicted the outcome of 27 of the 32 first-round games. That’s pretty good for a purely statistical model.
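
A minimal sketch of that adjusted rule in Python, assuming the model exposes a per-game win probability for the lower-seeded team (the function and variable names here are hypothetical, not The Spax’s actual code):

```python
# Pick the lower seed whenever the model gives it at least a 45% chance;
# otherwise stick with the higher seed.
UPSET_THRESHOLD = 0.45

def pick_winner(higher_seed: str, lower_seed: str, lower_seed_win_prob: float) -> str:
    """Return the predicted winner under the threshold rule."""
    if lower_seed_win_prob >= UPSET_THRESHOLD:
        return lower_seed
    return higher_seed

# The model gave (12) Oregon a 42.03% chance against (5) Wisconsin, so this
# rule still picks Wisconsin: one of the five misses listed below.
print(pick_winner("Wisconsin", "Oregon", 0.4203))  # Wisconsin
```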

The five games that this method missed are listed below.

| Predicted Winner | Actual Winner | Actual Winner's Win Probability |
| --- | --- | --- |
| (8) VCU | (9) UCF | 37.30% |
| (7) Louisville | (10) Minnesota | 30.44% |
| (5) Marquette | (12) Murray State | 39.88% |
| (5) Wisconsin | (12) Oregon | 42.03% |
| (4) Kansas State | (13) UC Irvine | 36.96% |

A few factors made these upsets less surprising, but the model does not account for them. It’s not easy to quantify the sheer ability of a superstar like Ja Morant to take over a game, and because of Morant’s excellence, picking Murray State over Marquette was a popular opinion. The model also doesn’t factor injuries into the equation: Dean Wade’s injury left Kansas State vulnerable to an upset by UC Irvine, and lo and behold, the Wildcats lost. The Oregon Ducks were not an unpopular upset pick either; the five-seed Wisconsin Badgers were just two-point favorites. Oddsmakers also listed UCF as a slight favorite, contrary to our analysis. The model failed to anticipate how likely these games were to end with the lower-seeded team on top.

Theoretically, though, our model should improve every year as it is given more data with which to predict future results. Its accuracy through the first round is fairly promising considering this is the first tournament in which the model has been used.
