An Analysis of Strength of Schedule for the Men's 2026 AFL Season

The men’s AFL fixture for 2026 was released last* week and, as is tradition, we’ll analyse it to see which teams we think have fared better or worse than others.

In particular, we’ll look for answers to two questions:

  1. Which teams fared best and which worst in terms of the overall difficulty of the teams they face in their schedule (the Strength of Schedule analysis), before and after adjusting for venue effects? Our measure here will be the MoSHBODS Combined Ratings of a team's opponents, which we'll adjust for venue effects in the supplementary analysis (see the sketch below)

  2. Which teams fared best and which worst in terms of the matchups they missed out on, given that only 23 of the 34 games in a full all-plays-all fixture are played (the Impact of Missing Schedule analysis)? Our measure here will be how much more or less likely each team would be to finish in the Top 10 ladder positions were the missing parts of the fixture actually played.

(* the final set of simulations took days to run - I won’t get into why)
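To give a rough sense of what the Strength of Schedule calculation involves, here's a minimal R sketch: sum each team's opponents' ratings, with and without a venue adjustment. The data frames and column names (fixture, ratings, venue_adj) are hypothetical stand-ins rather than the actual MoSHBODS outputs.

```r
# A minimal sketch of the raw and venue-adjusted Strength of Schedule idea.
# The data frames and column names here are hypothetical stand-ins for the
# actual MoSHBODS outputs.

library(dplyr)

# fixture:   one row per game per team, with columns team, opponent, venue
# ratings:   opponent, combined_rating (the opponent's Combined Rating)
# venue_adj: team, venue, adjustment (expected scoring impact of the venue)

sos <- fixture %>%
  left_join(ratings,   by = "opponent") %>%
  left_join(venue_adj, by = c("team", "venue")) %>%
  group_by(team) %>%
  summarise(
    raw_sos      = sum(combined_rating),              # unadjusted
    adjusted_sos = sum(combined_rating - adjustment)  # venue-adjusted
  ) %>%
  arrange(desc(adjusted_sos))  # larger values = tougher schedule
```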

Read More

Building Your Own Men's AFL Game Score Progression Simulator

In this (long) blog I’ll walk you through the concepts and R code behind the creation of a fairly simple score progression simulator.

(There’s a link for you to download the entire code yourself at the end of the blog.)

All we’ll be interested in are “events” - period starts, period ends, goals and behinds - and the algorithm will determine for us, given the event that’s just occurred, what the next event is and how much later it will take place.

To be able to do that, the first thing we’re going to need is some data about the typical time between events based on historical games, which we can obtain using the Swiss Army knife of footy data, the fitzRoy R package.
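To make the idea concrete before we get to the real thing, here's a stripped-down sketch of the core simulation step. The event set, transition probabilities, and timing distribution below are invented placeholders; in the blog proper they are estimated from the historical data.

```r
# A rough sketch of the core simulation step: given the event that has just
# occurred, sample the next event and the time until it happens.
# The transition probabilities and timing parameters are invented for
# illustration only.

events <- c("home_goal", "away_goal", "home_behind", "away_behind", "period_end")

# Hypothetical next-event probabilities, one row per current event
transition <- matrix(0.2, nrow = length(events), ncol = length(events),
                     dimnames = list(events, events))

simulate_next_event <- function(current_event, elapsed_secs) {
  next_event <- sample(events, size = 1, prob = transition[current_event, ])
  gap_secs   <- rexp(1, rate = 1 / 90)  # placeholder: mean ~90s between events
  list(event = next_event, time = elapsed_secs + gap_secs)
}

# Example: what follows a home goal scored at the 300-second mark?
simulate_next_event("home_goal", 300)
```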

Read More

How Many Disposals Do You Need to Get the Coaches' Attention?

In the previous blog we investigated the differences between coaches and umpires in the player statistics they appear to take most notice of when casting their respective player-related votes.

We found some similarities (both are very influenced by disposal counts), and some differences (coaches are more influenced by whether the player is on the winning or losing team), but one thing we didn’t investigate was the specific nature of the relationships between individual player metrics and voting behaviour. For example, we know that disposals are an important metric in determining Brownlow and Coaches’ votes, but we don’t know exactly how the number of votes that a player receives varies as the disposal count changes.
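By way of illustration, one simple way to explore that relationship is to summarise average votes at each disposal count, something like the sketch below. The player_games data frame and its column names are hypothetical stand-ins for the actual player statistics data.

```r
# A minimal sketch of one way to look at how votes vary with disposal count.
# player_games is a hypothetical data frame with one row per player per game
# and columns disposals, brownlow_votes and coaches_votes.

library(dplyr)

votes_by_disposals <- player_games %>%
  group_by(disposals) %>%
  summarise(
    games            = n(),
    mean_brownlow    = mean(brownlow_votes),
    mean_coaches     = mean(coaches_votes),
    pct_any_brownlow = mean(brownlow_votes > 0),
    .groups = "drop"
  )

# Plotting the means against disposals then shows the shape of the relationship
```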

Read More

Do Umpires and Coaches Notice Different Things In Assigning Player Votes?

At the conclusion of each game in the men’s AFL home and away season, umpires and coaches are asked to vote on who they saw as the best players in the game. Umpires assign 3 votes to the player they rate as best, 2 votes to the next best, 1 vote to the third best, and (implicitly) 0 votes to every other player. It is these votes that are used to award the Brownlow Medal at the end of the season.

Similarly, the coaches of both teams are asked to independently cast 5-4-3-2-1 votes for the players they see as the five best, meaning that each player can end up with anywhere between 0 and 10 Coaches’ votes.
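As a small worked example of how those two ballots combine (with invented player names):

```r
# How the two coaches' independent 5-4-3-2-1 ballots combine into a 0-10
# total for each player. Player names are invented for illustration.

home_coach <- c("Player A" = 5, "Player B" = 4, "Player C" = 3, "Player D" = 2, "Player E" = 1)
away_coach <- c("Player A" = 5, "Player F" = 4, "Player B" = 3, "Player G" = 2, "Player H" = 1)

players <- union(names(home_coach), names(away_coach))

total_votes <- sapply(players, function(p) {
  sum(home_coach[p], away_coach[p], na.rm = TRUE)
})

sort(total_votes, decreasing = TRUE)
# Player A gets 10 (5 + 5), Player B gets 7 (4 + 3); players named by neither
# coach would get 0
```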

The question for today is: to what extent can available player game statistics tell us whether and how coaches and umpires differ in how they arrive at their votes?

(Note that we’ll not be getting into the issue of individual umpire or coach quirks, snubs, or biases, and instead be looking at the data across all voting umpires and coaches.)

Read More

Measuring Strength of Schedule in Terms of Expected Wins

A few weeks back I analysed the men’s 2025 AFL schedule with a view to determining which teams had secured relatively easier overall fixtures, and which had secured relatively more difficult overall fixtures.

We investigated various approaches there and reached some conclusions about relative team fixture difficulty, but none of the methods provided an intuitive way to interpret the outputs.

On a related note, this week I had a kind email from a reader who suggested that there might be an opportunity to continuously update teams’ ‘fixture difficulty rating’ (which is just another term for strength of schedule) during the season, since this kind of service is frequently provided by various fantasy leagues for English football and other sports.

All of which got me revisiting my strength of schedule methodology.
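To give a flavour of the expected-wins idea, here's a hedged sketch: convert each game's rating gap into a win probability and sum those probabilities across the fixture. The logistic conversion and its scaling constant are illustrative assumptions only, not necessarily the conversion used here.

```r
# A sketch of the "expected wins" idea: convert each game's rating gap into
# a win probability and sum those probabilities across the schedule.
# The logistic conversion and the scale constant are illustrative assumptions.

expected_wins <- function(team_rating, opponent_ratings, scale = 30) {
  win_probs <- 1 / (1 + exp(-(team_rating - opponent_ratings) / scale))
  sum(win_probs)
}

# Example: an average team (rating 0) facing a mix of opponents
expected_wins(0, opponent_ratings = c(-20, -5, 0, 10, 25))
```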

Read More

Simulation Replicates and Returns to a Perfect Model

The Situation

We’ve built a model designed to estimate the probability of a binary event (say, for example, the probability that the home team wins on the line market in the AFL).

It’s a good model - very good, in fact, because it is perfectly calibrated. In other words, when the true probability of an event is X%, its average estimate of the probability of that event is also X%.

Those probability estimates, however, are the result of running some simulation replicates with a stochastic element, which means that those estimates will diverge from X% to an extent determined by how many replicates we run.
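To see how that divergence behaves, here's a quick illustrative simulation in R (the 52.5% true probability is made up purely for the example):

```r
# How the spread of a simulated probability estimate shrinks as the number of
# replicates grows. Each estimate is the proportion of "wins" in n replicates.

set.seed(1)
true_prob  <- 0.525
replicates <- c(100, 1000, 10000, 100000)

spread <- sapply(replicates, function(n) {
  estimates <- rbinom(1000, size = n, prob = true_prob) / n
  sd(estimates)
})

round(data.frame(replicates, spread,
                 theory = sqrt(true_prob * (1 - true_prob) / replicates)), 4)
# The standard deviation of the estimates falls roughly as 1 / sqrt(replicates)
```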

Read More