Thursday, 17 May 2012

Notes: Hot Cats'n'Pies

No... not 'Hot Cats in Pies'. That's just wrong.

With the Grand Final Replay of Geelong v Collingwood coming this week, it's a good opportunity to review the teams... not as a preview to the clash, but as a comparison of the teams' past.

This was inspired by a new reader and twitter follower John (@Magpies59Fan), who was interested in our ranking points determinations, and how Collingwood had achieved the highest ranking on record (by the FootyMaths Institute system).
He considered the great Geelong run of 2007-08 and thought they would be better than the Collingwood team of 2010-11.

He further detailed his thinking in the long-form tweets below; using calculations around average winning margins, he determined that Geelong should have achieved a higher level of ranking points than Collingwood.

John's logic, in his words:
Cats between 2007-09 lost 2 games in 44; C'wood from 2010-11 lost 2 games in 37 with 2 draws. Surely Geelong would have a higher ranking over that period than C'wood. Thinking about it more though, what was Geelong's average winning margin vs expected in that period (44 games), compared to C'wood's average winning margin vs expected in the 37 games? Maybe that is why C'wood ended up with a slightly higher ranking after R23 2011.
C'wood R11 2010 -> R23 2011: average winning margin 47.11pts, 1743pts from 37 games. Geelong R6 2007 -> PF 2008: average winning margin 48.32pts, 2126pts from 44 games. While it is very close, to me that means Geelong should have achieved a higher ranking (just) than C'wood did.
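For what it's worth, John's averages check out against his own totals (the points totals and game counts are his figures, not ours):

```python
# John's tweeted figures: total winning points and games played.
cwood_total, cwood_games = 1743, 37
cats_total, cats_games = 2126, 44

cwood_avg = cwood_total / cwood_games
cats_avg = cats_total / cats_games

print(f"Collingwood: {cwood_avg:.2f} pts/game")  # 47.11
print(f"Geelong:     {cats_avg:.2f} pts/game")   # 48.32
```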
Looking at our rankings system, it is worth noting that it doesn't use winning margins alone as a determinant. The calculations project a winning margin (as we post each round), and the actual winning margin is used in tandem with that projection. The ranking is then adjusted on outcome vs projection:
 - if an actual winning margin is greater than projected:
     ranking points are added.
 - if an actual winning margin is less than projected:
     ranking points are deducted.
(see also footnote)
Therefore, winning margins alone are not the be-all and end-all under our mechanism.
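As a sketch only - the exact formula and scaling constant of the Institute's system are not published here, so the `k` factor and the linear update shape below are assumptions - the adjustment logic reads like a margin-based Elo update:

```python
K = 0.5  # assumed scaling factor; the real system's constant is not published


def update_ranking(ranking, actual_margin, projected_margin, k=K):
    """Adjust a team's ranking on outcome vs projection.

    Margins are from this team's perspective.
    Beat the projection -> ranking points added;
    fall short of it -> ranking points deducted.
    """
    return ranking + k * (actual_margin - projected_margin)


# Projected to win by 20, actually won by 50: points added.
print(update_ranking(1100, actual_margin=50, projected_margin=20))  # 1115.0
# Projected to win by 20, won by only 5: points deducted despite the win.
print(update_ranking(1100, actual_margin=5, projected_margin=20))   # 1092.5
```

Note that under a rule like this a team can win and still shed ranking points, which is exactly why raw winning margins alone don't decide the rankings.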

Collingwood 2010-11 and Geelong 2007-08
But back to the two teams in question, Geelong 2007-08 and Collingwood 2010-11.
We have laid out the teams' matches near the foot of the blog (for those interested in studying the timeline and points changes), but to summarise...

 - Geelong started 2007 at 1098pts,
   peaked at 1449pts after the 2008 Qualifying Final win vs St.Kilda
   a net gain of 351pts

 - Collingwood started 2010 at 1175pts,
   peaked at 1454pts after the Round 24 win vs Fremantle
   a net gain of 279pts

Graphically represented, Collingwood's and Geelong's ranking points over the selected years are shown below.

So with only 5 ranking points between the peak ranks of these two teams, it would be hard to call one superior to the other. That said, Geelong's 2007 run of 15 consecutive weeks gaining ranking points might just tip the balance their way.

That's 15 consecutive gains of ranking points (during the period 6 May 2007 to 19 Aug 2007): each week either a win against the predicted tip, or a win by more than the calculated margin. Average winning margin: 50.1 points. You could fairly call that a domination of the opposition by Geelong.

The Hard Data
Full list of matches, and ranking point adjustments, here.

The system works for losses as well:
 - if a losing margin is greater than predicted:
     ranking points are deducted.
 - if a losing margin is less than predicted:
     ranking points are added.
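The same margin-vs-projection rule covers losses naturally if margins are signed (again a sketch with an assumed scaling factor, not the system's actual formula):

```python
K = 0.5  # assumed scaling factor; the real system's constant is not published


def update_ranking(ranking, actual_margin, projected_margin, k=K):
    # Signed margins: negative = loss. A narrower-than-projected loss
    # still beats the projection and earns ranking points.
    return ranking + k * (actual_margin - projected_margin)


# Projected to lose by 30, actually lost by only 10: points added.
print(update_ranking(1000, actual_margin=-10, projected_margin=-30))  # 1010.0
# Projected to lose by 10, actually lost by 40: points deducted.
print(update_ranking(1000, actual_margin=-40, projected_margin=-10))  # 985.0
```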


  1. Couple of comments.

    I believe you added GCS at 1000, and they dropped to 500. Although most of that change will occur on their side of the ledger, it probably added a couple of hundred ranking points to all the other sides (maybe 10-20 per team).

    The margin between the two teams (effectively 0.5 pts) is smaller than the margin of error. Moreover, the result is sensitive to initial rankings. If you manually reset both to some common score and reran their runs of good form, you'd get a different result. Although the ratings try to correct for this, Geelong, having started lower, are at a disadvantage.

    1. Interesting point about the effect of the Suns introduction... must see if there is any bump up of rank points with the introduction of the other teams over the seasons. Though...
      1) I am pretty sure other new teams (e.g. the Bears and Eagles in the '80s, back through to Richmond and University in 1908) had won more games in their first season than the Suns (twice as many).
      2) If the increase per team is 10-20pts, I might have to measure a combined teams increase (non new teams) and perhaps average them to balance out the differences in competing teams.

      Not sure if I will spend time on your second paragraph... the file, with 14,000+ games and innumerable calculations, just drives me batty.
      Though it is a good idea for a follow-up topic!

    2. If you go to the ratings page and take the average of the non-expansion teams on each row, you can see the jump, if any, from season to season. It only matters if there is some transfer from one (bad) team to the rest of the league, but I suspect there was some last season. (The alternative fix is to start GCS on 500 - actually, all expansion teams on their year-end rating - although that drags the average down as they improve.)

      On the second point, it is relatively easy: on the ratings page, manually change the cell(s) you want to start the test from, and read the new value. All it really shows is how sensitive ratings are to old results, rather than a "correct" rating. My guess is 10-30 points over two years.
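That sensitivity test can be sketched outside the spreadsheet too. Everything below is assumed for illustration - the `k` factor, and especially the idea that the projected margin scales with the rating gap - but it shows why starting points fade only partially: a higher-rated team is projected to win by more, so it earns less per win, and two different starts drift toward each other without fully converging.

```python
def replay(start, opponents, actual_margins, k=0.5, scale=0.1):
    """Replay a fixed run of results from a given starting rating.

    Assumed model: projected margin = scale * (rating - opponent rating),
    and rating += k * (actual - projected) after each game.
    """
    rating = start
    for opp, actual in zip(opponents, actual_margins):
        projected = scale * (rating - opp)  # projection rises with rating
        rating += k * (actual - projected)
    return rating


# Hypothetical run: 20 straight 40-point wins over 1000-rated opponents,
# replayed from two starts 200 pts apart.
low = replay(900, [1000] * 20, [40] * 20)
high = replay(1100, [1000] * 20, [40] * 20)

# The gap shrinks (here from 200 to ~72 pts) but doesn't vanish:
# old starting points still echo in the final ratings.
print(round(high - low, 1))  # 71.7
```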

    3. Looks like another few blog posts... 1) the effects on rank pts when a new team/s enter, and 2) adjusting the initial rank pts of the two in this post for another comparison.

      As to the Suns, I started them at 1000, but come season's end they were ~550. This season, I have started GWS at 500, as it makes more sense for the predictive/tipping aspect of the blog... come season's end, I intend to reset them to 1000. This will change all the ratings for this season.

      Finally, yes, it is rather simple to set a common points number and see the result... it's just that each entry of a score takes the poor old PC about 40sec to process!