Blog Feed

Round 6, 2020 Predictions

In this round, I have assigned home-team status in my model to the team that last played at the venue. I’ve had to do this since all the features in my linear/logistic models are the away team’s statistic subtracted from the home team’s. This has led to some strange tips, but let’s go with them…

Winning Team | Opponent | Date | Probability (%) | Margin
Geelong | Brisbane Lions | 09/07/2020 | 54 | 3
Hawthorn | Collingwood | 10/07/2020 | 60 | 8
Fremantle | St Kilda | 11/07/2020 | 54 | 3
West Coast | Adelaide | 11/07/2020 | 81 | 25
Melbourne | Gold Coast | 11/07/2020 | 70 | 15
Essendon | North Melbourne | 11/07/2020 | 63 | 10
Port Adelaide | Greater Western Sydney | 12/07/2020 | 67 | 13
Richmond | Sydney | 12/07/2020 | 62 | 5
Western Bulldogs | Carlton | 12/07/2020 | 64 | 8
Predictions for Round 6, 2020.

Round 5, 2020 Predictions

Winning Team | Opponent | Date | Probability (%) | Margin
Carlton | St Kilda | 02/07/2020 | 62 | 10
Collingwood | Essendon | 03/07/2020 | 79 | 21
West Coast | Sydney | 04/07/2020 | 62 | 4
Geelong | Gold Coast | 04/07/2020 | 87 | 36
Western Bulldogs | North Melbourne | 04/07/2020 | 66 | 11
Brisbane Lions | Port Adelaide | 04/07/2020 | 58 | 4
Fremantle | Adelaide | 05/07/2020 | 64 | 13
Richmond | Melbourne | 05/07/2020 | 73 | 15
Greater Western Sydney | Hawthorn | 05/07/2020 | 50 | 3
Predictions for Round 5, 2020.

Round 4, 2020 Predictions

Winning Team | Opponent | Date | Probability (%) | Margin
Sydney | Western Bulldogs | 25/06/2020 | 56 | 5
Collingwood | Greater Western Sydney | 26/06/2020 | 60 | 2
Port Adelaide | West Coast | 27/06/2020 | 75 | 25
Richmond | St Kilda | 27/06/2020 | 74 | 14
Essendon | Carlton | 27/06/2020 | 69 | 13
Fremantle | Gold Coast | 27/06/2020 | 59 | 10
Brisbane Lions | Adelaide | 28/06/2020 | 82 | 29
Geelong | Melbourne | 28/06/2020 | 71 | 19
Hawthorn | North Melbourne | 28/06/2020 | 67 | 11
Predictions for Round 4, 2020.

Round 3, 2020 Predictions

Winning Team | Opponent | Date | Probability (%) | Margin
Richmond | Hawthorn | 18/06/2020 | 81 | 21
Western Bulldogs | Greater Western Sydney | 19/06/2020 | 56 | 1
North Melbourne | Sydney | 20/06/2020 | 67 | 9
Collingwood | St Kilda | 20/06/2020 | 83 | 22
Brisbane Lions | West Coast | 20/06/2020 | 63 | 14
Geelong | Carlton | 20/06/2020 | 85 | 30
Adelaide | Gold Coast | 21/06/2020 | 60 | 6
Essendon | Melbourne | 21/06/2020 | 70 | 14
Port Adelaide | Fremantle | 21/06/2020 | 67 | 14
Predictions for Round 3, 2020.

Round 2, 2020 Predictions

Predictions for Round 2, 2020, assuming match time is reduced to 80% of its usual length. No changes to the models. Footy is back! I have been using machine learning to investigate other things which I will post about soon, stay tuned 🙂

Winning Team | Opponent | Date | Probability (%) | Margin
Richmond | Collingwood | 11/06/2020 | 51 | 2
Geelong | Hawthorn | 12/06/2020 | 60 | 9
Brisbane Lions | Fremantle | 13/06/2020 | 77 | 20
Carlton | Melbourne | 13/06/2020 | 64 | 9
Port Adelaide | Adelaide | 13/06/2020 | 76 | 26
West Coast | Gold Coast | 13/06/2020 | 86 | 33
Greater Western Sydney | North Melbourne | 14/06/2020 | 62 | 9
Sydney | Essendon | 14/06/2020 | 68 | 15
Western Bulldogs | St Kilda | 14/06/2020 | 53 | -2
Predictions for Round 2, 2020.

Comparing Models (Linear, Logistic, SVM)

What’s New?

I was invited to join Squiggle, an online AFL ladder where mathematical models are compared using three metrics: correct tips (accuracy of win/loss prediction), margin mean absolute error (MAE, accuracy of margin prediction), and bits (accuracy of the predicted probability of winning). Essentially, we want to maximise correct tips, minimise margin MAE, and maximise bits. I’m super excited to join the Squiggle competition, and can hopefully not come last in everything ;). The mathematics of bits can be found here, but essentially they measure how accurate your model is at predicting the probability of a team winning. For example, if Richmond and Carlton were to play each other 100 times, you are trying to predict what percentage of those games Richmond would win. Tipping, by contrast, is simply a measure of picking the winning team, and takes no account of the “confidence” behind a tip.
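For concreteness, here is a minimal sketch of how bits might be computed, based on my reading of the Squiggle definition (a correct tip earns 1 + log2(p), an incorrect tip 1 + log2(1 - p), and a draw 1 + 0.5·log2(p(1 - p)), where p is the probability assigned to the tipped team):

```python
import math

def game_bits(p_tipped, result):
    """Bits earned for one game, following (my reading of) the Squiggle
    definition. p_tipped is the probability assigned to the tipped team;
    result is 1 for a correct tip, 0 for an incorrect tip, 0.5 for a draw."""
    if result == 1:
        return 1 + math.log2(p_tipped)
    elif result == 0:
        return 1 + math.log2(1 - p_tipped)
    else:  # draw
        return 1 + 0.5 * math.log2(p_tipped * (1 - p_tipped))

# A 50% tip earns 0 bits either way; a confident correct tip earns more,
# and a confident incorrect tip is penalised heavily.
print(game_bits(0.5, 1))  # 0.0
print(game_bits(0.8, 1))  # ~0.678
print(game_bits(0.8, 0))  # ~-1.322
```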

I recently discovered a few things: my data contained duplicate games, and it was also missing games from some seasons. After correcting for this, my model predictions improved quite significantly. I also feature scaled each player’s stats within a game and then took the average for both teams, instead of feature scaling over the entire sample. This was done in an attempt to better capture which of the two teams performed better in a particular game.
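As a rough sketch of what I mean by scaling within a game (the column names and the min-max scaling here are assumptions for illustration; the data is one row per player per game):

```python
import pandas as pd

def per_game_team_averages(players: pd.DataFrame, stat_cols):
    """Min-max scale each stat within a single game (over all players in
    that game), then average the scaled stats per team. The column names
    'game_id' and 'team' are hypothetical."""
    by_game = players.groupby("game_id")[stat_cols]
    # Stats that are constant across a whole game come out as NaN here.
    scaled = (players[stat_cols] - by_game.transform("min")) / (
        by_game.transform("max") - by_game.transform("min")
    )
    out = players[["game_id", "team"]].join(scaled)
    return out.groupby(["game_id", "team"]).mean()
```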

Creating and Comparing Models

We left off with a linear regression model to predict margin, and a logistic regression model to predict the probability of winning. I decided to compare the predictions of both models, as well as a simple support vector machine for classification, using a radial basis function (Gaussian) kernel.

Our entire data set contains the previous 3000 games played (approximately 2004 to 2019). I split the data set into two subsets: a test set (the most recent 20% of games, i.e. all games from approximately the beginning of 2017 to the present) and a training set containing the remaining 80%. I then used a shuffle-split method to optimise the hyperparameters of each model, where for each hyperparameter value the accuracy was measured by averaging over 100 shuffles. This resulted in consistent hyperparameter values for each model. Note: technically, whilst optimising hyperparameters, I split the training set into two disjoint sets on each iteration, one to fit the model (75% of the training set) and one to cross-validate (25% of the training set).
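In code, the shuffle-split tuning looks something like the following scikit-learn sketch (the SVC and the grid of C values are illustrative, not my exact models; X_train and y_train are assumed to be numpy arrays):

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit
from sklearn.svm import SVC

def tune_C(X_train, y_train, C_grid, n_shuffles=100):
    """For each candidate hyperparameter value, average the accuracy
    over 100 random 75/25 splits of the training set."""
    splitter = ShuffleSplit(n_splits=n_shuffles, test_size=0.25, random_state=0)
    scores = {}
    for C in C_grid:
        accs = []
        for fit_idx, cv_idx in splitter.split(X_train):
            model = SVC(kernel="rbf", C=C)
            model.fit(X_train[fit_idx], y_train[fit_idx])
            accs.append(model.score(X_train[cv_idx], y_train[cv_idx]))
        scores[C] = np.mean(accs)
    return max(scores, key=scores.get)  # best-scoring C
```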

From the linear model, the probability of winning was calculated by assuming that the residuals are normally distributed, setting the mean equal to the predicted margin, and then calculating Pr(margin > 0) from the CDF.
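In code this is essentially a one-liner (sigma being the residual standard deviation estimated from the training set):

```python
from scipy.stats import norm

def win_probability(predicted_margin, sigma):
    """P(margin > 0), assuming the true margin ~ Normal(predicted_margin, sigma)."""
    return 1 - norm.cdf(0, loc=predicted_margin, scale=sigma)

print(win_probability(10, 39))  # ~0.60
```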

See Table 1 below for comparisons between models. Note: The 2019/18 seasons lie in the test set, and are disjoint from the training and CV sets.

We see that the logistic model is slightly better than the linear model for both tips and bits. The SVC model appears to be only slightly better at tips (66.7%) on the test set than the linear and logistic models (66.2% and 66.5%, respectively), but significantly worse when it comes to bits.

Model | Bits Test/Train | MAE Test/Train (Margin) | Tips Test/Train (%) | Bits 2019/18 | MAE 2019/18 (Margin) | Tips 2019/18 (%)
Linear | 71.41/419.42 | 28.15/29.58 | 66.2/70.1 | 16.44/35.49 | 27.48/27.40 | 63.3/68.1
Logistic | 74.39/426.36 | -/- | 66.5/70.3 | 17.04/36.66 | -/- | 64.7/68.1
SVC | 56.73/317.34 | -/- | 66.7/69.75 | 12.83/28.64 | -/- | 64.6/70.0
Table 1: Comparing models over the past 3000 games. Training set = 2400 games, test set = 600 games.

Interestingly, the MAE of the margin on the test set is slightly smaller than on the training set. I wouldn’t expect this, but I’ve looked over the code a bunch and I’m confident the training and test sets are disjoint. This could possibly be due to MAE not being a good measure of the model’s accuracy, or the test set size, or simply (as pointed out by MoS) that some years’ margins are more predictable overall than others. Here’s an updated figure of the 2019 AFL Season predictions.

The vertical axis is the predicted probability of the home team winning estimated by the logistic model. The horizontal axis is the predicted margin (home minus away team score) estimated by the linear model.

From these comparisons, I think it is optimal to use the linear model for margin prediction and the logistic model for probability-of-winning prediction in order to maximise bits. I will update my predictions using these two models now. It’s also interesting to note the performance of the SVM, and I’ll look into improving it along with exploring other models.

Until next time. Have a peaceful day 🙂

Round 1, 2020 Predictions (v3)

Margin and probability predictions for Round 1, 2020 using a slightly updated model (scaling factor has been added due to a change in time per quarter). Have a peaceful day 🙂

Winning Team | Opponent | Date | Probability (%) | Margin
Richmond | Carlton | 19/03/2020 | 85 | 30
Western Bulldogs | Collingwood | 20/03/2020 | 53 | 2
Essendon | Fremantle | 21/03/2020 | 54 | 2
Adelaide | Sydney | 21/03/2020 | 52 | -1
Port Adelaide | Gold Coast | 21/03/2020 | 84 | 33
Geelong | Greater Western Sydney | 21/03/2020 | 52 | 2
North Melbourne | St Kilda | 22/03/2020 | 72 | 14
Hawthorn | Brisbane Lions | 22/03/2020 | 66 | 11
West Coast | Melbourne | 22/03/2020 | 85 | 29
Predictions for Round 1, 2020.

Logistic Regression and Mistakes

So, once I had found some time outside of working on the thesis, I managed to set up a logistic regression model for estimating the win probability of each game, whilst also cleaning up the code. I also noticed my last model had an error (I’m very much used to making errors) and was not as accurate as I thought. This means I want both of my models to improve over the year. They’re not terrible, but they’re not the greatest; at least they’re “competitive”.

I’ll get into what has changed with the linear regression model for predicting margin, then the logistic regression model, the accuracy of both models, and what I’ll do next 🙂

What has changed…

So, it appears I gave my model training data that contained information from future games, and that has now been corrected (yikes). I also simplified the features: instead of what I originally did, which was create a function that took several statistics and used its output as a feature, the linear regression model now has 23 features (including the bias feature x_{0}). The learning curves for the linear model are given in Figure 1.

Fig 1: Learning curves for the linear regression model predicting margin. Training set (= 2000 games), CV and test set (= 500 games each).

Just as a reminder, the cost function is the squared difference, with a regularisation parameter optimised over the cross-validation set. The mean absolute error (MAE) on the test set is \approx 33.57, which I strongly want to improve (accuracy on previous seasons is discussed later). As can be seen in Figure 1, the learning curves converge to the same asymptote with the error “high”, which implies that our model is experiencing high bias. I attempted to increase the number of features by taking combinations of their products and powers, but none of my attempts fixed the high bias problem. I think the only way I can improve it is by getting new features. The two that I want to add first are (1) team chemistry (determine the number of times each player on a team has played with all other teammates, then take the sum over all players), and (2) distance traveled (determine the distance between venues, and use either the distance traveled since the last game or the second-last game).
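For reference, taking products and powers of features is essentially what scikit-learn’s PolynomialFeatures does; a sketch of the kind of expansion I tried (the exact combinations I used differ):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.random.randn(2000, 22)           # 22 features (bias added separately)
poly = PolynomialFeatures(degree=2, include_bias=False)
X_expanded = poly.fit_transform(X)      # adds squares and pairwise products
print(X_expanded.shape)                 # (2000, 275): 22 linear + 22 squares + 231 products
```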

Logistic Regression

Logistic regression is used for classification problems, i.e. where the outcome is discrete. In the case of an AFL match, the two outcomes are a loss (denoted by a value of 0) and a win (denoted by a value of 1). In linear regression, our hypothesis (what is used to calculate our margin prediction) was given by h_{\theta}(x) = \theta^{T}x; in logistic regression, our hypothesis is given by the sigmoid function (with parameters \theta)

h_{\theta}(x) = \frac{1}{1 + e^{-\theta^{T}x}}

where \theta, x \in \mathbb{R}^{(n+1)} (with n features). The sigmoid function is used since it maps to the open interval (0,1), essentially returning a probability of the home team winning for a given input x with parameter vector \theta. In order for our cost function to have a global minimum (so that we can determine the optimal parameters), we have to use an alternative to the squared difference. Our cost function is given by (with regularisation parameter \lambda)

J(\theta) = -\frac{1}{m} \sum_{i=1}^{m}(y^{(i)}\log(h_{\theta}(x^{(i)})) + (1 - y^{(i)})\log(1 - h_{\theta}(x^{(i)})) ) + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_{j}^2
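A minimal numpy sketch of this hypothesis and cost, assuming the feature matrix X has the bias column x_0 = 1 prepended and the bias parameter is excluded from regularisation (matching the sum over j \geq 1):

```python
import numpy as np

def sigmoid(z):
    """The logistic hypothesis h_theta(x) applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))

def logistic_cost(theta, X, y, lam):
    """Regularised logistic cost J(theta). X is (m, n+1) with a leading
    column of ones; the bias theta[0] is not regularised."""
    m = len(y)
    h = sigmoid(X @ theta)
    cost = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
    reg = (lam / (2 * m)) * np.sum(theta[1:] ** 2)
    return cost + reg
```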

Our logistic model takes the same features as our linear model, and again I chose gradient descent as the optimisation algorithm. The learning curves for the logistic model are given in Figure 2.

Fig 2: Learning curves for the logistic regression model predicting probability of winning. Training set (= 2000 games), CV and test set (= 500 games each).

After the optimal parameters are found, the hypothesis function h_{\theta}(x) returns the probability of the home team winning, given the features x \in \mathbb{R}^{(n+1)}. Like Figure 1, Figure 2 also shows strong bias, implying we need more features!

Results

I ran both the margin and probability-of-winning models over the 2018 and 2019 seasons, using up-to-date information for each round.

For the 2019 season, the logistic model had 127 correct tips (61.35% accuracy), while the average on Squiggle (models only) was 135 correct tips (65.1% accuracy). The linear model had an MAE of 27.73, where the Squiggle average was 27.71. A scatter plot of probability-of-winning prediction vs margin prediction for the 2019 season is given in Figure 3.

Fig 3: 2019 Season

For the 2018 season, the logistic model had 144 correct tips (69.6% accuracy), while the average on Squiggle (models only) was 143 correct tips (69.1% accuracy). The linear model had an MAE of 28.86, where the Squiggle average was 27.35. A scatter plot of probability-of-winning prediction vs margin prediction for the 2018 season is given in Figure 4.

Fig 4: 2018 Season

What is interesting from both Figure 3 and Figure 4 is the small range of margin predictions for both seasons. Notice how the margin predictions for both seasons lie in the range of approximately (-25, 35) points, whereas high performing models like Live Ladders, Massey Ratings, and Squiggle have margin prediction ranges of approximately (-40, 50), (-45, 50), and (-50, 50) respectively (see the really beautiful Matter of Stats post). Another interesting property of both figures is the linearity of the two plots.

What next?

My results for both the 2019 and 2018 seasons are not as good as I would like them to be. But there are still features to add that could improve them. Once that is done, I will move on to creating neural networks for both margin prediction and probability-of-winning prediction.

Thanks for reading and have a peaceful day 🙂

AFL GO (version 1)

Introduction

Hey there! I’m Charles and I’m currently completing my Honours, specialising in Astrophysics & Astronomy at the Australian National University, where I completed my BSc majoring in Mathematics and Theoretical Physics. Over the summer of 2019/20 I’ve been studying machine learning in my spare time, and I thought a nice first project would be modelling the outcome of AFL games. Over the past few years I’ve briefly followed the AFL modelling league Squiggle, and I’ve been very impressed with how smart and clever everyone on there is. The name of my model(s) is AFL GO (AFL Gadget-type Operator), taking inspiration from the big BT.

Anyhow, let me explain my goals and what I have done thus far.

The goal of this project is to begin “small” and work up. I plan to begin by using a linear regression model to predict margin (the difference between home team and away team score), then logistic regression to predict outcomes (win/lose), and eventually neural networks. At present I have only created a linear regression model to predict margin, which also (somewhat poorly) predicts the probability of winning/losing.

Linear Regression

In order to create an “accurate” linear regression model for margin prediction, one requires “enough” historical data points that include all the features/inputs (i.e. all measurements we will use to predict a margin, e.g. team rating, recent performance, etc.). We will denote a single input sample as x^{(i)} = (x_0^{(i)}, x_1^{(i)}, \ldots, x_n^{(i)}) (where x_0 is the bias variable), and the corresponding margin y^{(i)} = (Home Team Points) - (Away Team Points).

Historical Data

All of the historical data used in creating this model originated from AFL Tables and was obtained using fitzRoy (this person is a legend). It contains the statistics of every player for every game played between the year 2000 and the present (\approx 3000 games). There’s nothing particularly special about this time period; it was chosen simply to maximise the amount of data I had, and because some of the features I chose aren’t available for years prior to 2000.

Features

Choosing the features/inputs to use seems somewhat hand-wavy, and it would be ideal to have an AFL professional explain what they believe are the important features in determining the margin of a game. Since I don’t have that privilege, I’ll make some hand-wavy guesses at important features:

  • Home ground advantage.
  • Long-term rating of a team.
  • Short-term win/loss performance.
  • Short-term team performance in particular statistics (attacking, defending, etc).
  • Distance traveled between games.
  • Number of days between games (recovery).
  • Bye rounds (breaking momentum?).
  • Time spent playing together (chemistry).
  • Short-term individual performance.

The n=9 (relatively simple) features x=(x_0, x_1, \ldots, x_n) that I have chosen to begin with are the following:

  • (1) Difference between a rolling Elo rating over the past 20 games (long-term rating).
  • (2) Difference between the number of wins over the past 5 games.
  • (3-8) Difference between the average team statistic over the past 5 games in the six distinct statistics (scoring, contested, uncontested, defense, possession, attack).
  • (9) Home ground advantage (absorbed in the bias parameter \theta_0).

where the “difference” is the home team statistic minus the away team statistic.

Features 3 through 8 are the differences in the average team statistics over the previous 5 games. They are functions of the following:

  • Scoring = Goals, Behinds, Goal Assists
  • Contested = Hit outs, Tackles, Contested Possessions, Contested Marks
  • Uncontested = Uncontested Possessions
  • Defense = Rebounds, One Percenters
  • Possession = Handballs, Marks, Kicks
  • Attack = Inside 50s, Clearances, Marks inside 50

Some of these are scaled appropriately according to the Supercoach Rating formula used for players.
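As a sketch of how features 3 to 8 might be assembled from the groups above (the stat names are illustrative, and plain sums stand in for the Supercoach-style weightings, which I haven’t reproduced here):

```python
# Hypothetical rolling-average team stats for the home and away sides.
STAT_GROUPS = {
    "scoring":     ["goals", "behinds", "goal_assists"],
    "contested":   ["hit_outs", "tackles", "contested_possessions", "contested_marks"],
    "uncontested": ["uncontested_possessions"],
    "defense":     ["rebounds", "one_percenters"],
    "possession":  ["handballs", "marks", "kicks"],
    "attack":      ["inside_50s", "clearances", "marks_inside_50"],
}

def difference_features(home_avgs: dict, away_avgs: dict) -> list:
    """Features 3-8: home minus away, for each aggregate stat group.
    Plain sums stand in for the Supercoach-style weightings."""
    return [
        sum(home_avgs[s] for s in stats) - sum(away_avgs[s] for s in stats)
        for stats in STAT_GROUPS.values()
    ]
```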

Elo Rating

An Elo rating of a team is simply a measurement of how good that team is based on the outcomes of their previous games (win, draw, or loss) and the Elo rating of the corresponding opponent.

For example, say Team A and Team B have Elo ratings E_A and E_B respectively. Before the game is played, the predicted outcome for team A (denoted by P_A, also known as the probability of team A winning) is calculated using the logistic function, i.e.

P_A = \left(1 + 10^{-(E_A - E_B)/400}\right)^{-1} \in (0,1)

and the probability of Team B winning is given by P_B = 1 - P_A. So, if the rating of Team A (E_A) is much larger than the rating of Team B (E_B), then the probability of Team A winning (P_A) approaches 1. Similarly, if the rating of Team A is much smaller than the rating of Team B, then the probability of Team A winning approaches 0.

In this example, the two teams play one another, and by the end of the game some result occurs for Team A and Team B (either a win = 1, draw = 0.5, or loss = 0), denoted by R_A and R_B = 1 - R_A respectively. The two ratings of Team A and Team B are then updated

E_{A, new} = E_A + k(R_A - P_A)
E_{B, new} = E_B + k(R_B - P_B)

where k is a parameter, which will be optimised in the future, but for now is set to k=20.

Initially, all teams have the same rating of 1500, and for each game played between two teams their respective Elo ratings are adjusted. As more games are played, the Elo ratings approach their “true” values. If we’re trying to use Elo as an accurate measurement of a team’s long-term rating, then with too few games the ratings will be inaccurate because each team hasn’t yet played everyone, and with too many games they will be inaccurate because lineups and performance change over several seasons. For this reason, for any given game in our historical data, the Elo ratings of both teams are calculated over their previous 20 games (varying this parameter will be investigated in the future).
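Putting the above together, the Elo update is only a few lines (a direct transcription of the formulas, with k = 20):

```python
def elo_expected(e_a, e_b):
    """Probability of Team A winning, from the logistic function."""
    return 1.0 / (1.0 + 10 ** (-(e_a - e_b) / 400))

def elo_update(e_a, e_b, result_a, k=20):
    """Update both ratings after a game. result_a is 1 (win), 0.5 (draw),
    or 0 (loss) for Team A."""
    p_a = elo_expected(e_a, e_b)
    new_a = e_a + k * (result_a - p_a)
    new_b = e_b + k * ((1 - result_a) - (1 - p_a))
    return new_a, new_b

# Two evenly rated teams: the winner gains 10 points, the loser drops 10.
print(elo_update(1500, 1500, result_a=1))  # (1510.0, 1490.0)
```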

Cost Function and Optimisation

The cost function J(\theta, \lambda) chosen for optimising our parameters is given by

J(\theta, \lambda) = \frac{1}{2m} \sum_{i = 1}^{m} (h_{\theta}(x^{(i)}) - y^{(i)})^2 + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_{j}^2

where m is the number of sample points, x^{(i)} \in \mathbb{R}^{n+1} is the i-th sample input, y^{(i)} \in \mathbb{R} is the i-th sample output, \lambda \in \mathbb{R} is the regularisation parameter, and \theta \in \mathbb{R}^{n+1} is the parameter vector we want to optimise. The linear hypothesis h_{\theta}(x^{(i)}) is given by

h_{\theta}(x^{(i)}) = \theta_{0}x_0^{(i)} + \theta_{1}x_1^{(i)} + \ldots + \theta_{n}x_n^{(i)} = \theta^{T}x^{(i)}

Now, we have approximately 3000 games worth of data. I’m going to use 2500 in total (so that I can still calculate an appropriate Elo rating for each game). We will randomly assign all 2500 games to three sets: a training set (containing 1500 games), a cross-validation (CV) set (containing 500 games), and a test set (containing 500 games), such that any two sets are disjoint.

The training set’s purpose is to optimise the parameter vector \theta for a given regularisation parameter \lambda (whose purpose is to prevent overfitting). The CV set is used to measure which combination of (\theta, \lambda) produces the smallest error. The test set is used to measure how accurate our model (with optimal parameters) is on data it has never seen before. Keep in mind that since the data in each set is randomly chosen, re-randomising the sets would cause the model to optimise at slightly different parameters, but the change is basically negligible (I might discuss this in a future post).

So, given m data samples (x^{(i)}, y^{(i)})_{i=1}^{m} (i.e. each game has its features stored in x^{(i)} and margin y^{(i)}), we would like to choose the parameter vector \theta that minimises the cost function J(\theta, \lambda). Essentially, a classic calculus minimisation problem. It could be solved analytically quite quickly on a computer since we have only a few parameters, but we do it numerically using gradient descent (I won’t explain it, but it’s described here).
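A numpy sketch of the cost and the gradient descent loop, assuming a leading column of ones in X and an unregularised bias (the learning rate and iteration count are illustrative):

```python
import numpy as np

def linear_cost(theta, X, y, lam):
    """Regularised squared-error cost J(theta, lambda)."""
    m = len(y)
    residuals = X @ theta - y
    return (residuals @ residuals) / (2 * m) + (lam / (2 * m)) * np.sum(theta[1:] ** 2)

def gradient_descent(X, y, lam, alpha=0.01, n_iters=5000):
    """Minimise the cost by repeatedly stepping against its gradient."""
    m, n1 = X.shape
    theta = np.zeros(n1)
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y) / m
        grad[1:] += (lam / m) * theta[1:]  # don't regularise the bias
        theta -= alpha * grad
    return theta
```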

We plot the learning curves for the training, cross-validation, and test sets in the figure below. Without going into too much detail, since these learning curves converge, our current model appears to be free of overfitting.

[Figure: Learning curves for the training, CV, and test sets.]

I’m actually not sure whether it matters that these learning curves cross (I believe it’s simply chance).

From the calculated parameters, I tested the model by tipping the team with a predicted margin greater than zero; it has 71.3% accuracy on the training set (biased), 68.4% on the CV set, and 68.8% on the test set.

Margin Prediction and Probability of Winning

We now have the optimised parameters \theta and can use this model to predict a game’s margin simply by measuring the 9 features x^{(j)} mentioned earlier and plugging them into our linear hypothesis h_{\theta}(x^{(j)}) = \theta^{T}x^{(j)}.

In order to calculate the probability of winning, I can think of two possible methods: (1) simply use the Elo ratings we calculate for both teams and the logistic function to produce a probability, or (2) assume our data is normally distributed about our model, i.e. has mean \mu = h_{\theta}(x) and a standard deviation \sigma that can be calculated from our cost function (it turns out \sigma \approx 39 points), and calculate Pr(M > 0) (where M is the margin). I’m honestly not sure which one I should choose in the meantime, so the probabilities for Round 1 on my Twitter use the Elo rating prediction.
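As a quick worked example of method (2), with a hypothetical predicted margin of \mu = 10 points and \sigma \approx 39:

Pr(M > 0) = 1 - \Phi\left(\frac{0 - \mu}{\sigma}\right) = 1 - \Phi(-10/39) \approx 0.60

i.e. roughly a 60% chance of the home team winning.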

In the future, I plan on using logistic regression with similar features in order to create a separate model to predict probability of winning.

Conclusion

Well, that’s about it. There’s so much I don’t know about statistics and Machine Learning, but I guess that’s why I’m doing this project. If you have any features that you think would be influential for my model please let me know! Very soon I should have my ladder prediction out (hopefully it looks somewhat beautiful) along with my weekly predictions. Thank you for reading and I hope you have a peaceful day 🙂