Player Experience Data: Another Look at Predictive Modelling
In an earlier post, we built a predictive model for game margins using teams’ individual and shared experience data (gathered via the fitzRoy R package) to supplement the MoSHBODS forecasts.
We found there that, whilst both shared and individual experience data were correlated with game margins, they provided no additional predictive value over and above the MoSHBODS forecasts alone - at least when used in a randomForest with the following specification:
Game Margin = MoSHBODS Expected Margin + Own Average Individual Experience + Opponent Average Individual Experience + Own Average Shared Experience + Opponent Average Shared Experience
That result, we surmised, might be at least partly due to the fact that the MoSHBODS forecasts implicitly include experience to the extent that this experience is reflected in previous results and to the extent that experience doesn’t typically change much from game to game.
The data shows this assumption of generally small changes in week-to-week average experience levels is justified, although there are some examples of relatively large changes, especially in the modern era.
If you think a little harder about that contention though, it suggests another potentially fruitful line of enquiry: to what extent does the change in experience from one game to the next offer some predictive value over and above MoSSBODS and MoSHBODS, neither of which could possibly incorporate the knowledge of any such change? We know that large changes in average experience levels are relatively rare, but it seems reasonable that, when they occur, they’ll be significant.
Now it also seems plausible that increases and decreases in average experience levels could have differential impacts so, for today’s model, we’ll fit a Multivariate Adaptive Regression Spline (MARS) model, which includes hinge functions to model different linear relationships between our input variables and our target variable over different ranges of those inputs.
The underlying equation will be
Game Margin = MoSHBODS Expected Margin + Relative Change in Average Individual Experience + Relative Change in Average Shared Experience
where the relative changes are defined as the team’s change less the opponent’s change (so, for example, if a team’s average individual experience rises by 5 games while its opponent’s falls by 3 games, the relative change is +8).
(Formulating the model in this way ensures that forecasts from the model will give the same result but with sign reversed if we forecast for one team rather than the other in a given contest. In hindsight, I probably should have done this in the earlier modelling work too, but the formulation I used actually gave the experience data a better chance of proving itself predictive.)
As we did last time, we'll build the model on a randomly selected 50% sample of all 15,398 games from V/AFL history (choosing whether to adopt a home team or away team perspective also at random), and then estimate the mean absolute error (MAE) of predictions from this model on the remaining 50% of games - the "holdout" sample. We compare that MAE with what we get from using MoSHBODS alone on that same 50% holdout sample.
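For anyone wanting to replicate the approach, here's a minimal sketch of that procedure using the earth package's implementation of MARS. The data frame and column names are purely illustrative assumptions, not the variables actually used, and the random choice of home or away perspective is omitted for brevity.

```r
# A minimal sketch, assuming a data frame all_games with one row per game
# and (hypothetical) columns margin, moshbods_expected_margin,
# rel_change_avg_indiv_exp and rel_change_avg_shared_exp
library(earth)

set.seed(1)

# Randomly assign 50% of games to the training sample; the rest form the holdout
train_ix <- sample(nrow(all_games), size = nrow(all_games) %/% 2)
train    <- all_games[train_ix, ]
holdout  <- all_games[-train_ix, ]

# Fit the MARS model described by the underlying equation above
mars_fit <- earth(
  margin ~ moshbods_expected_margin +
    rel_change_avg_indiv_exp +
    rel_change_avg_shared_exp,
  data = train
)

# Compare the holdout MAE of the MARS model with that of MoSHBODS alone
mae_mars     <- mean(abs(holdout$margin - predict(mars_fit, newdata = holdout)))
mae_moshbods <- mean(abs(holdout$margin - holdout$moshbods_expected_margin))
```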
The summary for the fitted model is shown at right and is interpreted as follows.
The first term after the intercept tells us that, for MoSHBODS Expected Margins below about -15 (for which the expression in the brackets is positive, which triggers this component), every 1 point decrease in the Expected Margins leads to about a 1.1 point decrease in the forecast.
The next term tells us that, for MoSHBODS Expected Margins above about -15 (for which the expression in the brackets is positive), every 1 point increase in the Expected Margins leads to about a 0.95 point increase in the forecast. In essence, these two terms make some slight adjustments to the raw (adjusted) MoSHBODS forecasts.
It’s the next four terms, which we should consider in pairs, that tell us about the impact of changes in experience.
The third and fourth terms, together, tell us that
for relative increases in a team’s average individual experience less than 34.2 (which includes relative decreases in average individual experience), we should reduce our forecast margin by about 0.15 points for every 1 game in “lost” relative average individual experience.
for relative increases in a team’s average individual experience greater than 34.2, we should increase our forecast margin by about 1.7 points for every 1 game in “regained” relative average individual experience. Note that - as you can see from the rug plot in the relevant chart below - this term is relatively rarely triggered. In only 0.6% of all the games from history has a relative increase this large occurred. For the most part, then, we’ll be playing in the lesser-sloped part of the partial dependence plot.
The fifth and sixth terms tell us about the impact of relative changes in shared experience, and reveal that:
for relative increases in a team’s average shared experience less than 4.6 (which includes relative decreases in average shared experience), we should reduce our forecast margin by about 0.3 points for every 1 game in “lost” relative average shared experience.
for relative increases in a team’s average shared experience greater than 4.6, we should decrease our forecast margin by about 0.8 points for every 1 game in “regained” relative average shared experience.
That’s a somewhat counterintuitive result, and suggests that there is some optimal level of regained shared experience from one game to the next, and that regaining too much shared experience - say by the return of a very experienced player - tends to have an immediately deleterious effect on performance. Bear in mind, however, that relative changes in shared and individual experience are highly correlated (+0.83), so a team regaining shared experience will tend to be regaining individual experience too, and enjoying the performance lift from that quantified in the earlier paragraphs directly above the chart.
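To make the hinge-function mechanics a little more concrete, the fitted equation can be written out as a plain R function using the approximate knots and coefficients quoted above. The intercept isn't quoted in the text, so it appears here only as a placeholder, and the function name and arguments are my own.

```r
# The MARS hinge function: zero below the knot, linear above it
h <- function(x) pmax(0, x)

# Approximate reconstruction of the fitted equation (intercept unknown,
# so supplied as a placeholder argument)
predict_margin <- function(moshbods, d_indiv, d_shared, intercept = 0) {
  intercept -
    1.10 * h(-15.3 - moshbods) +  # Expected Margins below about -15
    0.95 * h(moshbods + 15.3) -   # Expected Margins above about -15
    0.15 * h(34.2 - d_indiv) +    # relative individual experience changes below 34.2
    1.70 * h(d_indiv - 34.2) -    # rare, large relative increases above 34.2
    0.30 * h(4.6 - d_shared) -    # relative shared experience changes below 4.6
    0.80 * h(d_shared - 4.6)      # relative shared experience increases above 4.6
}
```

Written this way it's easy to see that each input contributes two line segments meeting at the knot, which is exactly the differential treatment of increases and decreases we were after.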
As we did for the previous model, we can also produce a variable importance plot for this model, which allows us to rank and talk in broad, quantitative terms about the predictive ability of all the input variables. This time, however, we’ll be using the vip package and function to make the assessment of variable importance. You can read more about the genesis of this measurement in this excellent pre-print.
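As a sketch, and assuming the mars_fit object from earlier, that assessment takes only a couple of lines (the function defaults may vary with the vip version installed):

```r
library(vip)

vi(mars_fit)                     # tabulated importance scores for each input
vip(mars_fit, num_features = 3)  # the corresponding bar chart
```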
We can see from this chart the overwhelming importance of the MoSHBODS forecast, and that relative changes in individual experience are more important than relative changes in shared experience.
Now ultimately, of course, what matters is predictive performance on the holdout sample, which is summarised in the table below.
The MARS model is superior to both MoSHBODS-based forecasts in 51 of the 122 seasons, including six of the last eight and 17 of the 38 since 1980. Over the entire 122-season history, it has a slightly smaller MAE than the bias-corrected MoSHBODS forecasts. That’s a small but genuine improvement.
(By the way, a similar model built using MoSSBODS in place of MoSHBODS produces an all-time MAE of 26.78 points per game compared to the MAE of 26.84 that we get if we use MoSSBODS alone.)
MODERN ERAS ONLY
So, we’ve seen that changes in experience are useful supplements to MoSHBODS forecasts in predicting game margins if we fit a single model to the entire history of the sport. It seems reasonable that the relationship between changes in experience and game margins might have changed over time though, so let’s re-estimate the MARS model using the same methodology, but including only games from the last two eras, that is, from 1980 onwards.
We again fit the model to 50% of the available games, leaving the other 50% as a holdout.
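In code terms, and again assuming the illustrative objects from the earlier sketch plus a hypothetical season column, the refit is just a filter and a fresh call to earth:

```r
# Restrict to the last two eras and refit on a fresh 50% training sample
modern <- subset(all_games, season >= 1980)

set.seed(2)
train_ix_modern <- sample(nrow(modern), size = nrow(modern) %/% 2)

mars_fit_modern <- earth(
  margin ~ moshbods_expected_margin +
    rel_change_avg_indiv_exp +
    rel_change_avg_shared_exp,
  data = modern[train_ix_modern, ]
)
```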
What we get is what’s shown at right, which includes:
a similar treatment of MoSHBODS forecasts, although the knot is at a different value (+6.9 versus -15.3)
a somewhat similar treatment of the relative change in average individual experience, although now with only a single knot and an implication that decreases in relative average individual experience have a deleterious effect when those decreases are large enough, but that other changes in relative average individual experience, including increases, have no effect. In short: player outs matter, player ins don’t.
a complete absence of the relative change in average shared experience variable.
The relevant partial dependence plot appears below.
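(For anyone reproducing charts of this kind, the plotmo package, a common companion to earth, draws plots of this general type, although the originals may have been produced differently.)

```r
library(plotmo)

# Plot the model's response against each predictor, holding the other
# predictors at their median values - similar in spirit to a partial
# dependence plot
plotmo(mars_fit_modern)
```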
The MAEs for the entire 39-year period are 31.11 using the adjusted MoSHBODS forecasts, and 31.10 using the forecasts from the model. Looking on a season-by-season basis and pitting the model against only the adjusted MoSHBODS forecasts, the model’s forecasts are superior in 18 of the 39 years, including 8 of the last 10.
SUMMARY AND CONCLUSION
We've found that experience, formulated as the relative change in average individual experience per player or in average shared experience per pair, offers some additional predictive power over what we obtain from using MoSHBODS forecasts alone, especially if we adopt an all-time view.
For the most recent eras, however, only relative changes in average individual experience levels are relevant at all, and the gains over MoSHBODS alone are very small, though somewhat larger in the last decade (about 0.21 points of MAE).