An Analysis of Strength of Schedule for the Men's 2025 AFL Season
The men’s AFL fixture for 2025 was recently released and, as is tradition here, we’ll analyse it to see which teams we think have an easier or harder schedule.
Last year’s men’s season results for MoSHBODS and MoSSBODS - as forecasters and as opinion-sources for wagering - were at odds with what had gone before.
Other analyses have suggested that the MoS twins might have been a bit unlucky in the extent to which 2024 differed from bookmaker expectations, and I’ve never been one for knee-jerk reactions to single events, but the performance has nonetheless made me think more deeply about the algorithms underpinning the two Rating Systems, more details on which were provided in this blog from 2020, and in the blogs to which it links.
Over the past few blogs (here and here) I’ve been investigating different methods for untangling skill from luck in forecasting game margins and, in this blog, we’ll try another approach, this time using what are called xScores.
One source of randomness in the AFL is how well a team converts the scoring opportunities it creates into goals versus behinds. Given enough data, analysts far cleverer than I can estimate how often a shot of a particular type taken from a particular point of the field under particular conditions should result in a goal, a behind, or no score at all.
So, we can adjust for that randomness in conversion by replacing the result of every scoring opportunity by the average score that we would expect an average player to generate from that opportunity given its specific characteristics. By summing the expected score associated with every scoring opportunity for a team in a given game we can come up with an expected score, or xScore, for that team.
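The summation described above can be sketched in a few lines of Python. To be clear, the shot probabilities below are purely illustrative, not outputs of any real expected-scoring model, and the function names are my own invention.

```python
# Sketch of the xScore idea: replace each scoring shot's actual result with
# its expected value, then sum over all of a team's shots in a game.

def shot_expected_score(p_goal, p_behind):
    """Expected score for one shot: a goal is worth 6 points, a behind 1."""
    return 6 * p_goal + 1 * p_behind

def team_xscore(shots):
    """Sum expected scores over a list of (p_goal, p_behind) shot profiles."""
    return sum(shot_expected_score(pg, pb) for pg, pb in shots)

# A hypothetical team with four scoring opportunities of varying quality
shots = [(0.85, 0.10), (0.50, 0.35), (0.30, 0.45), (0.60, 0.30)]
print(round(team_xscore(shots), 1))  # 14.7
```

The actual score from those four shots could be anywhere from 0 to 24 points; the xScore of 14.7 is the conversion-luck-free benchmark against which that actual score can be compared.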
For this blog, I’ll be using the xScores created by Twitter’s @AFLxScore for the years 2017 to 2020, and those created by Twitter’s @WheeloRatings for the years 2021 to 2024.
Let’s look firstly at the season-by-season Squiggle results of using, as a game’s margin, the xScore margin instead of the actual margin.
In the previous blog, I compared Squiggle forecasters’ actual margin prediction MAE results with a distribution of potential MAE outcomes from the same forecasts across 10,000 simulated 2024 seasons as one way of untangling the skill and luck portions of those actual results.
Those simulations require us to select “ground truth” for the underlying expected margin in each game. In the previous blog we used bookmaker data with an added random component of a Normal variable with mean 0 and standard deviation 8 as that ground truth.
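The ground-truth construction just described can be sketched as follows. The bookmaker margins, forecasts, and the 36-point standard deviation for actual results around the truth are all illustrative assumptions of mine, not figures from the original analysis; only the Normal(0, 8) perturbation comes from the text.

```python
# Sketch: take bookmaker expected margins plus Normal(0, 8) noise as ground
# truth, simulate many seasons of actual margins around that truth, and build
# a distribution of a forecaster's season MAE.
import numpy as np

rng = np.random.default_rng(42)

bookie_margins = np.array([12.0, -5.0, 30.0, 3.0, -18.0])  # hypothetical games
forecasts = np.array([10.0, -2.0, 25.0, 0.0, -20.0])       # a forecaster's calls

n_sims = 10_000
# Ground truth = bookmaker margin + Normal(0, 8) perturbation, per simulation
truth = bookie_margins + rng.normal(0, 8, size=(n_sims, bookie_margins.size))
# Actual results scatter around the truth with a much larger spread
actual = truth + rng.normal(0, 36, size=truth.shape)

maes = np.abs(actual - forecasts).mean(axis=1)
print(f"median simulated MAE: {np.median(maes):.1f}")
```

Comparing a forecaster’s actual season MAE against this simulated distribution tells us whether the result is surprising given the forecasts themselves, which is the essence of the skill-versus-luck decomposition.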
The Squiggle website is a place where forecasters can post their forecasts for the winning team and winning margin, provide probability estimates for upcoming games of men’s AFL football, and see how well or otherwise they perform relative to other forecasters. The only criteria for posting there are that a forecaster must have a history of performing “reasonably” well, and must not include any human-related inputs, such as bookmaker prices, in their model.
It’s been running since 2017 and, since 2018, has included a derived forecaster, named s10, which is a weighted average of the 10 best Squiggle models, based on mean absolute margin error, from the previous season. The MoS model had been included in s10 in every year from 2018 to 2024, but will be absent in 2025 due to a relatively lowly 22nd-place finish.
In this blog, among other things, I want to get a sense of the extent to which that apparently below-average performance might be attributed to skill versus luck.
More than once here on the MoS website we’ve looked at the topic of favourite-longshot bias (FLB), which asserts that bookmakers apply a higher profit margin to the prices of underdogs than they do to favourites. In one MoS piece (15 years ago!) I took a cursory look and found some evidence for FLB using 2006 to 2008 data, and, in another piece a few years later, I had a more detailed look and found only weak to moderate evidence using opening TAB data from 2006 to 2010.
At this point I think it’s fair to say that the jury is still out on FLB’s existence, and waiting for more convincing evidence either way (and very unhappy at having been sequestered for 13 years in the meantime).
Bookmakers, love them or lose to them, are good at their basic job, which is accurately estimating the probability of outcomes, and they give clues about their probability estimates in the prices they set. The problem is, those clues are cloaked in profit.
There are no doubt a number of viable ways of doing this, but one obvious approach is to fit a logistic equation of the form shown at right.
This provides an S-shaped mapping where estimated win probabilities respond most to changes in expected margins when those margins are near zero. It also ensures that all estimated probabilities lie between 0 and 1, which they must.
I’ve used this form of mapping for many years with values of k in the 0.04 to 0.05 range, and have found it to be very serviceable. I’ve also previously fitted it to bookmaker data and found that it generally provides an excellent fit.
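Since the equation itself appeared as an image in the original post, here’s a sketch of the mapping under the assumption that it takes the standard logistic form, probability = 1 / (1 + exp(-k × expected margin)), which is consistent with the S-shape, the zero-margin behaviour, and the k values described above. The exact form the original used is not reproduced here, so treat this as illustrative.

```python
import math

def win_probability(expected_margin, k=0.045):
    """Map an expected margin (in points) to a win probability via a logistic
    curve. k in the 0.04 to 0.05 range, per the text; the functional form is
    an assumption, as the original equation was shown only as an image."""
    return 1 / (1 + math.exp(-k * expected_margin))

print(win_probability(0))    # 0.5: an even-money game
print(win_probability(24))   # a solid favourite, roughly 3 in 4
```

Note how the slope is steepest at a zero margin: a 6-point change in expected margin near zero moves the probability far more than the same change does for a 60-point favourite, which is exactly the behaviour described above.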
Almost 10 years ago I wrote a blog that, among other things, noted that the score progressions - the goals.behinds numbers at the end of each quarter for both teams - were unique for every game ever played, regardless of the order in which you took the two teams’ score progressions: home first then away, or away first then home, even choosing the order at random for every game. At that point, the statement was true for 14,490 games.
It seemed pretty startling then but, as of the end of 2024’s Round 9, the statement is STILL true, and that’s now for 16,487 games. V/AFL games remain as snowflake-like as ever.
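For the curious, here’s a sketch of how such a uniqueness check can be done: key each game by its quarter-by-quarter (goals, behinds) progression under both team orderings, and flag any collision. The two toy games below are hypothetical, chosen only to show the mechanics.

```python
# Check score-progression uniqueness across games, under either team ordering.

def progression_keys(home_quarters, away_quarters):
    """Return both orderings of a game's quarter-end (goals, behinds) tuples."""
    return (tuple(home_quarters) + tuple(away_quarters),
            tuple(away_quarters) + tuple(home_quarters))

def all_unique(games):
    """True if no two games share a progression under either ordering."""
    seen = set()
    for home, away in games:
        a, b = progression_keys(home, away)
        if a in seen or b in seen:
            return False
        seen.add(a)
        seen.add(b)
    return True

games = [
    ([(3, 2), (6, 4), (9, 5), (12, 8)], [(2, 1), (4, 3), (7, 6), (10, 9)]),
    ([(2, 1), (4, 3), (7, 6), (10, 9)], [(3, 2), (6, 5), (9, 5), (12, 8)]),
]
print(all_unique(games))  # True: these two progressions differ
```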
With the 2024 Men’s AFL season just weeks away, I thought it timely to look at different perspectives of how teams have historically performed in home and away seasons from one season to the next.
I was thinking about the Strength of Schedule metric used in this blog from yesterday, and it struck me that, rather than using the raw values of the opponent team’s MoSHBODS rating and (for some metrics) the net Venue Performance Values (VPVs) for a game, we could, instead, convert these numbers into a win probability, which might make the resulting aggregate Strength of Schedule value more readily interpretable.
The men’s AFL fixture for 2024 was recently released and, once again this year, we’ll analyse it to see what it means for all 18 teams.
The men’s AFL fixture for 2023 was released earlier this week, and tradition requires that the MoS website publishes its assessment of which teams fared best and which worst in that fixture given what the MoS models think about relative team strengths and venue effects.
This week I’ve been investigating the use of the Skellam Distribution in modelling AFL scores. That distribution can be derived as the difference between two correlated Poisson variables, which potentially makes it useful in an AFL context for modelling differences between team metrics.
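The construction just described can be demonstrated by simulation: build two correlated Poisson variables by giving them a shared Poisson component, and note that their difference is Skellam-distributed because the shared part cancels. The rate parameters below are illustrative, not fitted to any AFL data.

```python
# Difference of two correlated Poissons is Skellam: the common component
# cancels, leaving Skellam(9.0, 7.0) with mean 9 - 7 and variance 9 + 7.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

common = rng.poisson(5.0, n)         # shared component -> positive correlation
home = rng.poisson(9.0, n) + common  # e.g. a home team count metric
away = rng.poisson(7.0, n) + common  # e.g. the away team's equivalent

diff = home - away                   # Skellam(9.0, 7.0)
print(diff.mean(), diff.var())       # close to 2.0 and 16.0 respectively
```

That the correlation drops out of the difference is part of the appeal: we can model the dependence between the two teams’ counts however we like via the shared component without changing the distribution of their margin.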
Every year there’ll be a game or twelve where the fans of the losing team lament a lop-sided free kick count and largely attribute their team’s loss to the ineptitude (or worse) of the officiating. This will especially be the case where the visiting team has been on the thin end of the free kick count.
Commentators are keen to point out how important they feel the minutes just before a change are - especially before the final change - so today we’re going to investigate how our in-running estimate of a team’s victory probability at three-quarter time should be influenced by any streaks of scoring leading up to the break.
I thought that this year I might not have time to perform the traditional Strength of Schedule analysis, having spent far too long during the off-season re-optimising the MoS twins (more on which in a future blog), but here I am on a rainy night in Sydney with a toothache that a masochist would label ‘unpleasantly painful’ and the prospect of sleep before tomorrow’s 11:30am dental appointment fairly remote. So, let’s do it …
A few years back, I authored a blog post in which I deftly presented the case for the superiority of MAE over Accuracy for identifying the most talented forecasters.
Last season, we had the first partially player-based forecaster, MoSHPlay, which provided forecasts for game margins and game totals, and estimates of home team victory probabilities. It performed reasonably well in a year that was, in many ways, completely unlike those it had been trained on.
It used as inputs the margin and team score forecasts of MoSHBODS, and player ratings derived from historical SuperCoach scores.
In this blog I’ll take you through the process of reviewing the existing MoSHPlay forecasting equations, and investigate the efficacy of deriving player ratings from AFL Player Rating scores rather than from SuperCoach scores.
It feels slightly surreal that it’s only been a little over 12 months since last we did it, but it’s time to, once again, review the schedule for the men’s footy season ahead.
MAFL is a website for ...