Rating Teams Based on Ability to Generate and Prevent Scoring Shots
Three distinct contributions from three different Friends of MoS were the direct ingredients for this fresh (to MoS, anyway) take on team ratings:
- Joel, firstly, kindly invited me to work with him on developing an ELO-style Team Rating System for a sport other than Australian Rules, which prompted me to review, refine, significantly improve and generalise the R script that currently powers MoS' MARS Team Rating System
- Then, a few weeks back, an e-mail from Brendan asked whether I'd ever considered building a Rating System for the AFL based on Scoring Shots rather than Scores, the insightful rationale being that we know, from previous analysis, that Scoring Shot Conversion Rates are, for practical purposes, unpredictable
- Finally, just a week or so ago, Drew e-mailed me about his own AFL modelling endeavours, offered me some interesting insights about my own results and approach, and, most importantly for today's blog, raised the possibility of creating separate ELO Ratings for teams' Offensive and Defensive abilities - something that's already done for American baseball and football.
Other ideas about the conduct of the analysis and presentation of the results for and in this blog have come from discussions on Twitter and analyses performed by others on the various sports analytics sites. If you think there's a relevant site or blog post of yours or of someone else's that I should link to, let me know and I'll look at including it in a future post.
Anyway, today I bring you MoS' Scoring Shot Based Offensive and Defensive Team Rating System (or MoSSBOD).
What is MoSSBOD?
MoSSBOD is an ELO-style Team Rating System in which each team has two Ratings defined as follows:
- Defensive Rating, which measures how many fewer Scoring Shots than the all-team average (25.3 Scoring Shots per team per game over the period 2005 to 2014) it would be expected to concede to an average team at a neutral venue.
So, for example, a team Rated +3 would be expected to concede 25.3 – 3 = 22.3 Scoring Shots to an average team at a neutral venue.
- Offensive Rating, which measures how many more Scoring Shots than the all-team average it would be expected to score against an average team at a neutral venue.
So, for example, a team Rated -2 would be expected to score 25.3 + (-2) = 23.3 Scoring Shots against an average team at a neutral venue.
Note that, given these definitions, higher Ratings are always better than lower Ratings. The two Ratings also combine naturally: a team with an Offensive Rating of +2 facing a team with a Defensive Rating of +1 at a neutral venue would be expected to generate 25.3 + 2 - 1 = 26.3 Scoring Shots.
Both Ratings are updated after each game using the following equations (an R sketch of the update appears just after this list):
- (1) New Defensive Rating = Old Defensive Rating + k × (Actual Defensive Performance - Expected Defensive Performance)
- (2) Actual Defensive Performance = All-Team Scoring Shot Average - Adjusted Opponent's Scoring Shots
- (3) Expected Defensive Performance = Own Defensive Rating - Opponent's Offensive Rating + Venue Adjustment / 2
- (4) Adjusted Opponent's Scoring Shots = min(Scoring Shot Cap, Actual Opponent's Scoring Shots)
- (5) New Offensive Rating = Old Offensive Rating + k × (Actual Offensive Performance - Expected Offensive Performance)
- (6) Actual Offensive Performance = Adjusted Own Scoring Shots - All-Team Scoring Shot Average
- (7) Expected Offensive Performance = Own Offensive Rating - Opponent's Defensive Rating + Venue Adjustment / 2
- (8) Adjusted Own Scoring Shots = min(Scoring Shot Cap, Actual Own Scoring Shots)
- (9) Venue Adjustment = Home Ground Advantage + Interstate Advantage
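To make the mechanics concrete, here's a minimal R sketch of a single post-game update implementing equations (1) to (8). The function and argument names are mine, purely for illustration, and I've assumed a sign convention in which venue_adjustment (equation (9)) is positive when the team being updated enjoys the venue advantage and negative otherwise, which keeps the sum of all teams' Ratings at zero.

```r
# A minimal sketch of the MoSSBOD post-game update. Names and the
# venue_adjustment sign convention are illustrative assumptions only.
update_ratings <- function(def_rating, off_rating,         # team being updated
                           opp_def_rating, opp_off_rating, # its opponent
                           own_shots, opp_shots,           # actual Scoring Shots registered
                           venue_adjustment,               # HGA + Interstate Advantage, per (9)
                           k = 0.1,                        # round-dependent in practice
                           ss_average = 25.3,              # all-team Scoring Shot average
                           ss_cap = Inf) {                 # no Cap, per the optimisation
  # Equations (2) to (4): defensive performance
  adj_opp_shots <- min(ss_cap, opp_shots)
  actual_def    <- ss_average - adj_opp_shots
  expected_def  <- def_rating - opp_off_rating + venue_adjustment / 2

  # Equations (6) to (8): offensive performance
  adj_own_shots <- min(ss_cap, own_shots)
  actual_off    <- adj_own_shots - ss_average
  expected_off  <- off_rating - opp_def_rating + venue_adjustment / 2

  # Equations (1) and (5): the ELO-style Rating updates
  list(def_rating = def_rating + k * (actual_def - expected_def),
       off_rating = off_rating + k * (actual_off - expected_off))
}
```

By way of example, an average team (both Ratings 0) that conceded only 22 Scoring Shots to another average team at a neutral venue would see its Defensive Rating rise by k × (25.3 - 22) ≈ 0.33 for a k of around 0.1.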
There are a number of parameters in those equations, the optimal values for which I've estimated based on data for all games from 2005 to 2014, seeking to minimise the Mean Absolute Error (MAE) in the predicted game margin:
- We use 25.3 Scoring Shots as the all-team average for every game. It’s the average for all teams across the period 2005 to 2014.
- All teams started in Round 1 of season 2005 with Offensive and Defensive Ratings of 0. Teams that entered the competition later in the period also start with Ratings of 0 prior to their first game.
- The value of k in (1) and (5) above varies according to the round in which the game is played, as follows (see the R helper sketched after this list):
- k = 0.149 for Rounds 1 to 6
- k = 0.089 for Rounds 7 to 12
- k = 0.113 for Rounds 13 to 20
- k = 0.079 for Rounds 21 to the end of the Home and Away season
- k = 0.112 for Finals
- Home Ground Advantages (HGAs) have been determined for all combinations of home team and venue for which at least 5 games of history exist.
- A generic HGA of -3 Scoring Shots is used for any game for which insufficient history exists.
- Interstate Advantage accrues to a home team when it is playing in its home state and its opponent is not. Its value has been assessed as worth +1.8 Scoring Shots.
- The sum of HGA and Interstate Advantage is called Venue Adjustment and the advantage that it represents is assumed to accrue 50% in the form of defensive ability and 50% in the form of offensive ability.
- Optimisation suggests that no Cap is required on teams’ actual Scoring Shot data. Consequently, in (4) and (8) above, no Cap is imposed.
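As a small illustration of the round-dependent k, here's how the schedule above might be encoded in R; the round_number argument and the is_final flag are my own assumptions about how the fixture would be represented.

```r
# Round-dependent k used in equations (1) and (5)
k_for_round <- function(round_number, is_final = FALSE) {
  if (is_final)           return(0.112)  # Finals
  if (round_number <= 6)  return(0.149)  # Rounds 1 to 6
  if (round_number <= 12) return(0.089)  # Rounds 7 to 12
  if (round_number <= 20) return(0.113)  # Rounds 13 to 20
  0.079                                  # Round 21 to the end of the Home and Away season
}
```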
Finally, a couple of other things are required to make the model operational:
- Teams carry 68.2% of their previous season’s Rating into the next season.
- For the purposes of measuring the MAE for a candidate set of parameter values, margins measured in terms of Scoring Shots are converted into margins measured in terms of Points Scored by assuming a 53% Scoring Shot Conversion rate for all teams in every game. Since a converted Scoring Shot is a goal worth 6 points and an unconverted one is a behind worth 1 point, this makes 1 Scoring Shot = 0.53 × 6 + 0.47 × 1 = 3.65 points.
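In R, these two housekeeping steps are one-liners; again, the names are mine, for illustration only.

```r
# Season-to-season carryover: a team retains 68.2% of its end-of-season Rating
carry_over_rating <- function(rating, carryover = 0.682) carryover * rating

# Points value of one Scoring Shot at an assumed 53% conversion rate:
# a converted shot is a goal (6 points), an unconverted shot a behind (1 point)
points_per_shot <- 0.53 * 6 + 0.47 * 1   # = 3.65
```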
TEAM RATING HISTORY
Applying this model to the entire period over which it was optimised, and then extending it into the first 14 rounds of season 2015, yields the time series of Team Ratings shown below.
The red lines trace each team's Defensive Rating, with a value of zero applying to a team that would be expected to concede the average 25.3 Scoring Shots to an equally average team met at a neutral venue. The green lines provide a similar service for each team's Offensive abilities.
A few observations are interesting:
- The Swans have, for much of the period, been only an average team Offensively, but have excelled Defensively, most notably from 2005 to 2007.
- The Pies of 2010 and 2011, and the Hawks of 2012 have had the most dominant Offences during the period.
- The Crows of 2006, the Saints of 2009 and 2010, and (as mentioned) the Swans of 2006 and 2007 have had the most dominant Defences during the period.
Now the Ratings are defined in such a way that the sum of all teams' Offensive and Defensive Ratings is always zero. However, to the extent that average Scoring Shot production has been above or below the whole-period 25.3 figure, Offences or Defences might be said to have been more or less dominant in aggregate, and this is reflected in the average Team Defensive Rating (or, equivalently, the average Team Offensive Rating) considered alone. The time series of this average is depicted in the chart at right.
It suggests that, relative to the whole period, defences were somewhat weaker (and, therefore, offences stronger) in the 2007 and 2008 seasons, and somewhat stronger (with offences weaker) in 2014 and so far in 2015. In many years, late-season spikes are also evident as the Finals take place and Scoring Shot production tends to fall.
The link between the average excess of Defensive over Offensive Team Ratings and Scoring Shot production is made evident when you compare the chart above with the one at left, which plots the difference between the 25.3 all-period average and the actual average Scoring Shot production per game in each round. The correlation between the two time series is +0.785.
CURRENT RATINGS
This year only five teams are averaging more than 25.3 Scoring Shots per game, and ten teams are conceding fewer than 25.3 Scoring Shots per game, so it's no surprise that the chart of team Offensive and Defensive Ratings after Round 14 shows most teams with negative Offensive Ratings, and most with positive Defensive Ratings.
Ratings are, however, adjusted for the quality of the offences and defences faced, so it's not the case that each team's average Scoring Shot statistics completely determine its position on the chart. The Roos, for example, have generated only 23.8 Scoring Shots per game, but they've generated them against good enough defences that MoSSBOD gives them a slightly positive Offensive Rating.
Nonetheless, the very high correlation between these Ratings and the raw Scoring Shot statistics should be noted: the correlation between Defensive Ratings and Scoring Shots conceded per game is -0.97 (fewer concessions going with higher Ratings), and that between Offensive Ratings and Scoring Shots generated per game is +0.94. I suppose that after 14 rounds of a season there's some natural level of "balance" in every team's schedule. It'd be interesting - though it's a job for another day - to see how closely these Ratings reflect raw Scoring Shot data at other times of the season this year and in other seasons.
PREDICTIVE PERFORMANCE
As well as being used to rate teams' Offensive and Defensive abilities, MoSSBOD can also be used to generate predictions. A margin prediction for an upcoming game can be created by:
- Calculating the Net Difference in the teams' Offensive and Defensive Ratings (ie Home Team Offensive Rating - Away Team Offensive Rating + Home Team Defensive Rating - Away Team Defensive Rating)
- Calculating the Venue Adjustment for the game based on the HGA and any Interstate advantage that the home team might have
- Calculating Predicted Margin = (Net Difference + Venue Adjustment / 2) × 3.65
We can also create simple head-to-head predictions based on the sign of the Predicted Margin.
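Putting those steps together, the prediction might be sketched in R as follows. The argument names are illustrative, and hga and interstate_adv are assumed to come from the venue lookups described earlier.

```r
# Predicted Margin (in points) for the home team, per the three steps above
predict_margin <- function(home_off, home_def, away_off, away_def,
                           hga, interstate_adv = 0,
                           points_per_shot = 3.65) {
  # Net Difference in the teams' Offensive and Defensive Ratings
  net_difference   <- (home_off - away_off) + (home_def - away_def)
  # Venue Adjustment = HGA + Interstate Advantage
  venue_adjustment <- hga + interstate_adv
  # Convert the Scoring Shot margin into points
  (net_difference + venue_adjustment / 2) * points_per_shot
}

# Head-to-head tip from the sign of the Predicted Margin
predict_winner <- function(...) {
  margin <- predict_margin(...)
  if (margin > 0) "Home" else if (margin < 0) "Away" else "Toss-up"
}
```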
The results of performing this calculation for all of the games in-sample (ie for the period 2005 to 2014) and for the games from the first 14 rounds of 2015 are summarised in the table at right.
Mean and median errors bobble around zero for most of the years, as we'd hope. The exceptions are the first season, which was a calibrating season for MoSSBOD in which all teams started with 0 Ratings, and the three most recent seasons, where the drift hints at a possible structural change in team performances. The latter will be something to keep an eye on over the remainder of the season.
The two accuracy columns suggest that MoSSBOD is a competent, and sometimes even exceptional, tipper of results. A 74% result last year would have comfortably outperformed the bookmakers (the TAB tipped at 72%), as would a 73% performance in season 2013 (the TAB was at 71%). The 65% head-to-head record in the current year is about what you'd have achieved by tipping the higher-placed team on the competition ladder in each game, and would leave you about 6 tips behind the TAB. We should, of course, place more store in post-sample than in in-sample model performances, and so should treat this year's 65% as more indicative of the model's head-to-head tipping strength than any of the performance statistics from earlier years.
That's also true when we come to assess the model's Mean Absolute Error (MAE) performances, where we find that its in-sample results are strong but its 2015, post-sample, performance is especially noteworthy. A sub-30 MAE for the season-to-date is a very good result; MoS' best Predictor, C_Marg, has an MAE of 30.9 points per game at the same point in the season.
There is a lot more we can (and will) do with this model, but for now I'll finish with one last table showing the predictive performance of the MoSSBOD model on a team-by-team basis:
What's apparent from this table is that, using MoSSBOD as your guide, the margin results of GWS and Gold Coast, the latter especially at home, have been hardest to predict (in the sense that they've had the largest MAE).
By comparison, the results of Sydney, Carlton, the Roos and St Kilda have been easiest, with all providing sub-28 MAEs whether playing at home or away.
I should note that the Mean and Median Errors are all defined from the home team's perspective as the Actual minus the Expected margin. A negative Mean Error in the Playing At Home section therefore implies that we tend to overestimate the relevant team's margins when it plays at home, while a negative Mean Error in the Playing Away section implies that we tend to underestimate that team's performances (and overestimate those of its home team opponents). With that in mind, we can see that we tend, generally, to:
- Overestimate Essendon's, GWS's and West Coast's abilities at home and away (because their game margins tend to be poorer than we predict).
- Underestimate Fremantle's, Hawthorn's, Richmond's and Sydney's abilities at home and away (because their game margins tend to be better than we predict).
FURTHER DEVELOPMENTS
There's a lot that could be done to further develop and improve the basic MoSSBOD.
Some areas that might be worth exploring include:
- Calculating rolling HGAs rather than using a single figure optimised over the entire decade.
- Investigating whether Interstate Status should be treated as a continuous rather than a discrete variable. Maybe, for example, the trip from Melbourne to Perth has a different effect from the trip from Sydney to Melbourne.
- Investigating team conversion rates to see if a more useful predictive model than mine can be built.
- Exploring weighted Scoring Shots, perhaps not using the 6:1 ratio for goals to behinds that's used in the game itself but instead using something smaller that still upweights goals relative to behinds.
- Investigating other ways of converting expected margins couched in terms of Scoring Shots into margins expressed in terms of points. The current model simply assumes that 1 Scoring Shot = 3.65 points.
That's more than enough for today's blog though.
As always, I'd welcome feedback.