Rating the AFLW Teams (Post R2)
Today, a couple of new voices share their analysis here on MoS: sport lovers Declan Walpole and PhD student Robert Nguyen, who've teamed up to have a crack at rating the AFLW teams after two full rounds, and to use their ratings to predict the results for the upcoming round.
Over to them ...
The inaugural season of the AFL Women’s (AFLW) competition has exceeded virtually all expectations. Who could have guessed that the first ever match between Carlton and Collingwood at Ikon Park would draw a crowd of over 22,000, a sell-out that resulted in 1,000 people being locked out of the ground?
It would have been equally hard to predict the on-field performances of each team at the start of the season. With no form from previous seasons to inform expectations, the preseason predictions of Aussie Rules pundits have, almost invariably, not come true.
With two rounds of football now under our belt, we're in a position to take a statistical approach to objectively rating each team's performance so far - using the results from the first 8 games to tease out some early rankings.
Estimating Team Strength Using Scoring Shots
Our objective here is to assign each team a numeric rating that estimates its relative strength. Like MoSBODS (and the MoSHBODS pretender it's spawned), we use scoring shots rather than actual points scored as a proxy for team performance, acknowledging that team conversion rates appear, to a first approximation, to be largely random. Each team's rating represents the number of scoring shots by which that team is better or worse than our arbitrarily chosen and alphabetically superior benchmark team, Adelaide. The difference between two opposing teams’ ratings is then expected to explain the difference in the number of scoring shots in a match played between them at a neutral venue.
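In symbols (our own shorthand rather than anything formal), with $R_A$ and $R_B$ the ratings of teams A and B, and $SS_A$ and $SS_B$ their scoring shot counts, the neutral-venue model is simply:

$$\mathbb{E}\left[\,SS_A - SS_B\,\right] = R_A - R_B, \qquad R_{\text{Adelaide}} = 0$$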
To create the ratings, the results of AFLW matches are assumed, as a first approximation, to be driven by differences in underlying ability. Our ratings are then estimated to best explain the results we’ve observed thus far by minimising the sum of squared errors between actual and predicted scoring shot differentials. The method we have opted for is, unsurprisingly, referred to as a Least Squares Rating System.
Team ratings, of course, tell only part of the story: we also need to account for home ground advantage, which has been found empirically in virtually every sport and league. To include this, we fit an adjusted rating model that assumes a home ground advantage of 1.5 scoring shots for interstate clashes, and of 0.5 scoring shots for Melbourne derbies.
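For readers who like to see the machinery, here's a minimal sketch of how such a fit can be set up in Python. The match data below are made-up numbers purely for illustration (not the actual Round 1 and 2 results), and the real model may be implemented differently; the point is just the structure: fix Adelaide at 0, strip the assumed home ground advantage out of each observed differential, and solve what remains by least squares.

```python
import numpy as np

# Hypothetical match data, for illustration only (not the real results):
# (home_team, away_team, home_scoring_shots, away_scoring_shots, assumed_hga)
matches = [
    ("Carlton", "Collingwood", 13, 9, 0.5),          # Melbourne derby
    ("Adelaide", "GWS", 16, 7, 1.5),                 # interstate clash
    ("Melbourne", "Brisbane", 10, 12, 1.5),
    ("Western Bulldogs", "Fremantle", 14, 8, 1.5),
    ("Adelaide", "Western Bulldogs", 12, 11, 1.5),
    ("Brisbane", "GWS", 15, 6, 1.5),
    ("Collingwood", "Melbourne", 9, 11, 0.5),
    ("Fremantle", "Carlton", 8, 13, 1.5),
]

teams = sorted({t for m in matches for t in m[:2]})
# Fix Adelaide's rating at 0 (the benchmark) by leaving it out of the design matrix.
free_teams = [t for t in teams if t != "Adelaide"]
col = {t: j for j, t in enumerate(free_teams)}

X = np.zeros((len(matches), len(free_teams)))
y = np.zeros(len(matches))

for i, (home, away, home_ss, away_ss, hga) in enumerate(matches):
    # Observed scoring shot differential, with the assumed (fixed) home
    # ground advantage removed before fitting.
    y[i] = (home_ss - away_ss) - hga
    if home != "Adelaide":
        X[i, col[home]] = 1.0
    if away != "Adelaide":
        X[i, col[away]] = -1.0

# Ratings that minimise the sum of squared errors between actual and
# predicted scoring shot differentials.
ratings, *_ = np.linalg.lstsq(X, y, rcond=None)

all_ratings = {"Adelaide": 0.0, **dict(zip(free_teams, ratings))}
for team, rating in sorted(all_ratings.items(), key=lambda kv: -kv[1]):
    print(f"{team:<20} {rating:+.1f}")
```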
So, using this methodology, how does each AFLW team rate?
After two rounds of play, Adelaide appears to be the team to beat. Recall that we arbitrarily assigned them a rating of 0 to make the numerical problem mathematically tractable. The Western Bulldogs ran Adelaide close at the weekend and, having previously dispatched Fremantle, are rated 2nd strongest by our system, ahead of two unbeaten teams in Carlton and Brisbane.
Of course, our ratings have been optimised on a very small sample and so should be treated as rough estimates. However, it is interesting to note that, so far, there has been no intransitive triple of results - no situation where team A beat team B, team B beat team C, and team C beat team A - which lends some weight to the rank ordering of the teams above.
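Checking for such cycles is easy to automate. Here's a rough sketch, assuming a hypothetical dictionary of head-to-head results (the particular entries shown are placeholders, not the full set of results):

```python
from itertools import permutations

# Hypothetical head-to-head results: beat[(A, B)] is True if A has beaten B.
beat = {
    ("Adelaide", "GWS"): True,
    ("Western Bulldogs", "Fremantle"): True,
    ("Carlton", "Collingwood"): True,
    # ... remaining results would go here
}

def has_intransitive_triple(beat):
    """Return True if some team A beat B, B beat C, and C beat A."""
    teams = {t for pair in beat for t in pair}
    for a, b, c in permutations(teams, 3):
        if beat.get((a, b)) and beat.get((b, c)) and beat.get((c, a)):
            return True
    return False

print(has_intransitive_triple(beat))  # expect False, per the pattern noted above
```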
Round 3 Predictions Using the Adjusted Ratings
As mentioned, these ratings can be used to predict the results of future matchups. Let’s see how the system expects this weekend’s four scheduled games to play out.
So far, the average scoring shot in the AFLW has been worth 3.3 points. Accordingly, we convert the predicted difference in scoring shots into a predicted margin of victory using this multiplier.
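As a rough worked example (with invented inputs rather than any of this week's actual fixtures), the conversion looks like this:

```python
POINTS_PER_SCORING_SHOT = 3.3  # league-wide average so far

def predicted_margin(rating_difference_ss, hga_ss):
    """Convert a predicted scoring shot differential (home rating minus away
    rating, plus home ground advantage) into a predicted points margin."""
    return POINTS_PER_SCORING_SHOT * (rating_difference_ss + hga_ss)

# Hypothetical example: home side rated 4 scoring shots stronger, and an
# interstate clash, so 1.5 scoring shots of home ground advantage.
print(predicted_margin(4.0, 1.5))  # roughly an 18-point home favourite
```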
Three of the four matches involve interstate travel for the away team this week, and all the home teams are strongly favoured.
Lastly, we might wonder what the TAB has to say about all this (as at the time of writing).
We see that the bookmaker has GWS as outsiders against Fremantle, which might better reflect the teams' relative preseason expectations than their results to date - at least as far as our team ratings suggest. As for the other games, the TAB tends to agree with our relative rankings, although they are nowhere near as confident about the margins of victory in each game. Perhaps this reflects the naivety of fitting a model to only 8 data points.
For now then, take it all with a grain of salt ...