Game Statistics and the Dream Team
Today, a new voice comes to MAFL. It's Andrew's. He's the guy who uncovered the treasure-trove of game statistics on the AFL website and he's kindly contributed a blog taking you through some of the analyses he's performed on that data. Let me be the first to say "welcome mate".
I have lurked on the sidelines of MAFL and Matter of Stats for a couple of years and enjoyed many conversations with Tony about his blogs. I found myself infected with curiosity and so, with gratitude to Tony, here's my newbie blog post.
But I must make a confession before continuing. I play football with a round ball. I've watched AFL occasionally and attended a couple of games, but largely come to the stats from a position of ignorance; à la the Dunning-Kruger effect, I'll consider this an asset.
The Game Statistics
The statistics section on the AFL site provides game statistics and other records. These are the same statistics that Tony explored in an earlier blog and include, for example, the number of marks, hit-outs, and one-percenters registered by each team in a game.
This data is available from 2001 onwards on a season-average, season-total and per-game basis (with a few games missing), and even for each player in each game of each season. You can never have too much data!
Today I ask a few basic questions of the data:
- What is the relationship at the team level between game statistics and end-of-season ranking? Put differently, what helps win a season?
- Is there a relationship between these game stats and Tony's well-honed MARS Ratings?
- How dreamy is the "Dream Team" statistic provided by AFL?
As a reminder, here's the list of AFL statistics that we'll be exploring:
- behinds
- bounces
- centre clearances
- clangers
- contested marks
- contested possessions
- disposal efficiency
- disposals
- Dream Team points
- frees against
- frees for
- goal accuracy
- goal assists
- goals
- handballs
- hitouts
- inside 50s
- kicks
- marks
- marks inside 50
- rebound 50s
- stoppage clearances
- tackles
- total clearances
- total possessions
- uncontested possessions
For this blog we'll use the whole-season game average version of the data. (The AFL whole-season average statistics include Finals. This probably has a minor effect on the averages to the extent that (a) strategy might differ and (b) the average team strength is higher in these games. Alas, the missing stats for some games make it tricky to create a true regular season average. So we proceed with slightly dirty data - as we often must.)
Finally, we need “objective” or “outcome” measures of season performance. I choose:
- Ladder position: Lower is better. The range is 1 to 16, 17 or 18 depending upon the season, with Gold Coast and then GWS joining in recent years.
- Competition points: Higher is better. Again, there is a slight variation in the number of games played each season.
- MARS Rating at the end-of-season as provided by Tony. Higher is better.
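Throughout what follows I'll assume all of this has been wrangled into one tidy table: a row per team per season, a column per game statistic, plus the three outcome measures. Here's a minimal sketch of that set-up in Python; the file and column names are mine, purely for illustration, since the AFL site doesn't serve up a CSV like this.

```python
import pandas as pd

# Hypothetical tidy files: one row per team per season
stats = pd.read_csv("afl_season_average_stats_2001_2012.csv")   # the game statistics
outcomes = pd.read_csv("afl_season_outcomes_2001_2012.csv")     # ladder position,
                                                                 # competition points,
                                                                 # end-of-season MARS Rating

# One table per team-season: season-average statistics plus outcome measures
season_avg = stats.merge(outcomes, on=["season", "team"], how="inner")
```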
Correlations
To get the lay of the land, here's the correlation of each game statistic with each season outcome measure for the period 2001-2012, sorted by the correlation with ladder points.
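For anyone who'd like to reproduce a table like this, the calculation is just a set of pairwise correlations between each statistic and each outcome. A minimal sketch, using the hypothetical season_avg table above (column names are again illustrative):

```python
import pandas as pd

def outcome_correlations(df: pd.DataFrame,
                         stat_cols: list[str],
                         outcome_cols: list[str]) -> pd.DataFrame:
    """Pearson correlation of each season-average game statistic with each
    season outcome measure; one row per statistic, one column per outcome."""
    table = {stat: {out: df[stat].corr(df[out]) for out in outcome_cols}
             for stat in stat_cols}
    return pd.DataFrame(table).T

# e.g. outcome_correlations(season_avg,
#                           ["goals", "inside_50s", "rebound_50s", "clangers"],
#                           ["ladder_position", "comp_points", "mars_rating"])
```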
An unremarkable conclusion is that teams that score more goals do better than teams that score fewer goals. Behinds also fall into the "of course" category.
That leaves inside 50s, marks inside 50 and rebound 50s as decent predictors of ladder position (with rebound 50s sporting a negative correlation because they reflect the fact that the opposition has registered an inside 50 themselves, which is a positive for the opposition and a negative for the team in question). I expect a lot of footy coaches tell young players to get the ball inside the opposition's 50 and keep it there until they score a goal. It’s no surprise that this works for the pros too.
Amongst the remaining game statistics, kicks, contested marks and a few other metrics show more modest correlations, each with an absolute value above 0.25.
And, for all the fuss about them, who would think that generating more clangers might slightly improve a team's season outcome, or that offering an opponent free kicks might make little difference either way, or that those allegedly commitment-defining one-percenters might have little direct bearing on whether a team finishes first or last?
Punditry
The right column marks the statistics that are oft-discussed at half time by the pundits. On the basis of this analysis, most of the pundits' statistics bear little relationship to a favourable season outcome.
Now it's possible that the game statistics associated with season outcomes are different from those associated with winning individual games. Testing that would mean analysing the statistics on a game-by-game basis, from a win-loss and score-difference point of view – a topic I'll return to later.
Linear Models
First, a caution: my approach to the modelling challenge risks overfitting because, with 16 to 18 teams playing each season and 12 seasons to model, we have relatively few data points but 25 or so candidate regressors.
Rather than including all the variables without question, I've built a recursive linear modelling function that adds one variable at a time, at each step choosing the variable that produces the greatest improvement in adjusted R-squared. The table below shows the 1st, 2nd and 3rd variables selected by this process for various seasons and outcomes.
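To make the procedure concrete, here's a rough Python sketch of that greedy selection step using statsmodels. It's not the code I actually used – just an illustration of the idea, again assuming the hypothetical season_avg table from earlier:

```python
import pandas as pd
import statsmodels.api as sm

def forward_select(df: pd.DataFrame, outcome: str,
                   candidates: list[str], n_vars: int = 3) -> list[str]:
    """Greedy forward selection: at each step add the candidate regressor
    whose inclusion gives the largest adjusted R-squared."""
    chosen: list[str] = []
    for _ in range(n_vars):
        best_var, best_adj_r2 = None, -float("inf")
        for var in candidates:
            if var in chosen:
                continue
            X = sm.add_constant(df[chosen + [var]])
            adj_r2 = sm.OLS(df[outcome], X).fit().rsquared_adj
            if adj_r2 > best_adj_r2:
                best_var, best_adj_r2 = var, adj_r2
        chosen.append(best_var)
    return chosen
```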
As well, I exclude some of the AFL game stats from my modelling. Goals, behinds and goal assists almost directly predict team scores, so I've chosen to do without these variables for now. Dream Team points are also excluded because, as explored later, they're defined as an aggregate of goals, behinds and a handful of other game statistics. Total clearances are excluded too, because they are the sum of stoppage and centre clearances. Likewise, total possessions are excluded because they are the sum of contested and uncontested possessions.
Columns in the table above are ordered, roughly, by the metric's explanatory power.
Overall, variability in end-of-season MARS Ratings is more readily modelled using the full set of metrics shown than is variability in the other season outcomes. Ladder position is the most difficult to model, most likely because the difference in team quality between the teams in 1st and 2nd is different from that between the teams in 8th and 9th, and different again from that between the teams in 17th and 18th. The ability to model competition points with the full set of metrics generally lies between our ability to model the other two outcome measures.
Measured by the adjusted R-squared, the models are all reasonably cogent. The single-year models for 2011 and 2012 are particularly strong, though the top 3 predictors differ substantially, no matter which outcome measure is selected. More generally, however, the top 3 predictors are reasonably consistent over the different time periods, with inside 50s, marks inside 50, and stoppage or centre clearances making frequent appearances.
(As an aside, the full-season models seem fairly robust but with a noticeable drop in the period 2004-2006, which is not shown here. Principal component analysis has revealed some curious drift in game statistics that deserves more attention.)
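For the curious, that drift check is nothing exotic: standardise the season-average statistics, project each team-season onto the first couple of principal components, and watch how the yearly clouds move. A sketch with scikit-learn, again using the hypothetical season_avg table:

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def season_pca_scores(df: pd.DataFrame, stat_cols: list[str]) -> pd.DataFrame:
    """Project each team-season onto the first two principal components of the
    standardised game statistics, keeping the season label for plotting."""
    X = StandardScaler().fit_transform(df[stat_cols])
    scores = PCA(n_components=2).fit_transform(X)
    out = pd.DataFrame(scores, columns=["PC1", "PC2"], index=df.index)
    out["season"] = df["season"].to_numpy()
    return out
```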
We see here once again that most of the pundits' favourite statistics don't contribute much to the models' predictive power. Kicks are modestly useful in predicting a team's season performance, but handballs, one-percenters, marks and many other statistics just don't make the cut.
“Pretty Good” Model
We can build a “pretty good” model using only the three predictors that proved most consistently useful over the analysis periods described above: inside 50s, marks inside 50 and rebound 50s. The performance of this model is described in the last column of the table above.
The model is consistently effective in predicting teams' end-of-year MARS Ratings, suffering only about a 3-5% reduction in explained variance relative to the "fully saturated" models that include all of the predictors listed above. The model's predictions of ladder position and competition points are reasonably good for the multi-year models, but less so for the single-year models.
Dream Team
The AFL provides a statistic called “dream team points”. A quick analysis verifies that it is a linear combination of other statistics described by the following equation:
Dream Team Points = 6 x goals + behinds + 3 x kicks + 2 x handballs + 3 x marks + 4 x tackles + hitouts + frees for - 3 x frees against
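Restated as a simple function (the function and argument names are mine, but the coefficients are exactly those above):

```python
def dream_team_points(goals, behinds, kicks, handballs, marks,
                      tackles, hitouts, frees_for, frees_against):
    """The AFL Dream Team points formula, exactly as written above."""
    return (6 * goals + behinds + 3 * kicks + 2 * handballs + 3 * marks
            + 4 * tackles + hitouts + frees_for - 3 * frees_against)
```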
Is this a good heuristic and can we beat it?
The correlation table earlier in this post showed that Dream Team points are only weakly correlated with the season outcomes of interest: the correlations are all approximately +0.30.
This is perhaps not surprising since, excepting goals and behinds which we know help win games, the Dream Team number is based on game statistics that have so far not proven effective in predicting season outcomes.
So, we ask, is there a better equation for Dream Team points that is more closely aligned with game outcomes?
- The first competing model is the Pretty Good Model I described earlier, which uses only Inside 50s, Marks Inside 50 and Rebound 50s. To make it a little more realistic I trained the coefficients for the three statistics over the years 2009 to 2011 and then tested it on 2012.
For reference, the equation for predicting a team's end-of-year MARS Rating is:
MARS Rating = 778.471 + 4.842 x Average Inside 50s + 5.101 x Average Marks Inside 50 - 2.284 x Average Rebound 50s
- It doesn’t seem fair that the Dream Team points equation incorporates goals and behinds weighted exactly as they are in the game score. So the second competing model is allowed to incorporate goals, along with a handful of the other most predictive metrics; I call it the Better Pretty Good Model.
The fitted model now looks like this:
MARS Rating = 846.2224 + 12.0526 x Average Goals - 2.5841 x Average Rebound 50s + 0.8589 x Average Kicks - 6.8257 x Average Centre Clearances
It's no surprise that the new model chooses goals as its best metric; both fitted equations are restated as simple functions below.
(Let’s ignore for now the additional unfairness that the Dream Team metric has 7 variables to our 4.)
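For convenience, here are both fitted equations as functions, with the coefficients exactly as given above (the function names are mine):

```python
def pretty_good_mars(avg_inside_50s, avg_marks_inside_50, avg_rebound_50s):
    """Pretty Good Model: predicted end-of-year MARS Rating (fitted on 2009-2011)."""
    return (778.471 + 4.842 * avg_inside_50s
            + 5.101 * avg_marks_inside_50
            - 2.284 * avg_rebound_50s)

def better_pretty_good_mars(avg_goals, avg_rebound_50s, avg_kicks, avg_centre_clearances):
    """Better Pretty Good Model: predicted end-of-year MARS Rating (fitted on 2009-2011)."""
    return (846.2224 + 12.0526 * avg_goals
            - 2.5841 * avg_rebound_50s
            + 0.8589 * avg_kicks
            - 6.8257 * avg_centre_clearances)
```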
The table below lists the teams in the order in which they finished the 2012 home-and-away season, alongside their orderings based on MARS Rating, Dream Team points, and the Pretty Good and Better Pretty Good Model equations.
The correlation row is a crude measure of the similarity of the orderings – effectively a Spearman rank correlation.
It shows that end-of-year MARS Ratings very closely match the end-of-season ladder.
The AFL Dream Team points and the Pretty Good Model are not very pretty.
I’d say the Better Pretty Good Model is a decent model.
Dreamy Team Points
So I offer an alternate Dream Team points formula. The coefficients in the optimal linear model – the Better Pretty Good Model – are pretty ugly, while the standard Dream Team formula has nice round 6s, 4s, 3s, 2s and 1s in front of each of the metrics, which adds a certain elegance. Here's a compromise:
Dreamy Team Points = 6 x Average Goals - 1.25 x Average Rebound 50s + Average Kicks - 3.5 x Average Centre Clearances
Using this formula to order the teams in 2012 produces a rank correlation of 0.858 with the actual competition ladder, fractionally below the result for the optimal Better Pretty Good Model. Noting that the formula was derived from 2009-2011 data and then verified against the 2012 results, I'd say it's a good deal more dreamy than the Dream Team model.
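If you'd like to check the arithmetic yourself, here's the Dreamy Team formula as a function, along with the rank comparison, which is just a Spearman correlation between the actual 2012 ladder order and a model's ordering (variable names are illustrative):

```python
from scipy.stats import spearmanr

def dreamy_team_points(avg_goals, avg_rebound_50s, avg_kicks, avg_centre_clearances):
    """The rounded 'Dreamy Team' compromise formula."""
    return (6 * avg_goals - 1.25 * avg_rebound_50s
            + avg_kicks - 3.5 * avg_centre_clearances)

# Rank agreement between a model's 2012 ordering and the actual ladder, e.g.
# rho, p_value = spearmanr(actual_ladder_position, model_rank)
```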
By way of conclusion, it's clear that the AFL game statistics contain information that is strongly predictive of season outcomes. I hope to return to the next level of micro-analysis: do game statistics predict game outcomes, and are they the same statistics that predict overall season performance? Meanwhile, Tony is revealing insights on these same stats in relation to wagering.