Crow wrote: So in general, models only do so-so immediately but tend to look better over time? Was 2002 the worst draft for the models? How about the GMs? It would be nice to compare their performance, that year and every year, against the available models.
They have the actual draft order listed, which is what the GMs did.
It looks like the consensus ranking is the best performer, well above the actual draft order every year.
Developer of Box Plus/Minus
APBRmetrics Forum Administrator Twitter.com/DSMok1
So they do, thanks. I missed it in the middle of the second line of the key: one of ten lines, not mentioned in the graph title or, on my quick read, in the discussion. Tiny on my phone screen, but it is there. My oversight.
Best twice, and tied for best once (including with the consensus, Daniel, if I read it right; the many shades of green and blue make it a bit hard to tell), but all long ago.
Bettered by one or more models in all other years. Worst twice. Not that impressive for full-time insiders with six-to-seven-figure research budgets and years of study invested in each draft class.
While I without a doubt agree that the models perform better than GMs, one thing to keep in mind is that the models are all using "future" data to make these predictions. A fairer way to compare with GMs would be to use only prior years to make a particular year's projections. It would be interesting to see how much this would change things. I imagine recent years would be affected minimally, whereas earlier years might see some noticeable change from the smaller data sets.
Related: I think in some ways models could actually benefit from incorporating recency more (to pick up playing-style changes over the years) - for example, 3s in the 90s are likely a worse predictor than in recent years. I may attempt to incorporate this a bit in my next iteration of model improvements.
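To make the "only use prior years" idea concrete, here is a minimal sketch of an expanding-window (walk-forward) evaluation. The column names (`draft_year`, the feature list, the outcome column) and the use of ridge regression are illustrative assumptions, not anyone's actual draft model:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge

def walk_forward_projections(df, feature_cols, target_col, first_test_year):
    """Project each draft class using only strictly earlier classes.

    Assumes `df` has a `draft_year` column, pre-draft feature columns,
    and an eventual-outcome column (all hypothetical names). Returns a
    dict mapping each out-of-sample year to its predictions.
    """
    projections = {}
    for year in sorted(df["draft_year"].unique()):
        if year < first_test_year:
            continue  # reserve the earliest classes as the initial training set
        train = df[df["draft_year"] < year]   # prior classes only: no future data
        test = df[df["draft_year"] == year]
        model = Ridge().fit(train[feature_cols], train[target_col])
        projections[year] = model.predict(test[feature_cols])
    return projections
```

This is the evaluation a GM actually faces: projecting the 2002 class with only pre-2002 outcomes. Earlier test years get smaller training sets, which is exactly the effect jessefischer33 expects to matter most.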
jessefischer33 wrote: While I without a doubt agree that the models perform better than GMs, one thing to keep in mind is that the models are all using "future" data to make these predictions. A fairer way to compare with GMs would be to use only prior years to make a particular year's projections. It would be interesting to see how much this would change things. I imagine recent years would be affected minimally, whereas earlier years might see some noticeable change from the smaller data sets.
Related: I think in some ways models could actually benefit from incorporating recency more (to pick up playing-style changes over the years) - for example, 3s in the 90s are likely a worse predictor than in recent years. I may attempt to incorporate this a bit in my next iteration of model improvements.
Very good point. We've got future data involved here, and it is also likely that the draft models were trained on some of the years shown here, so this is an "in-sample" prediction.
In-sample prediction colors the current findings. But GMs had access to non-public information (workouts, interviews, medicals) and data that is not widely public (all games, high school, AAU, showcases, practices, off-court), plus the ability to influence performance via coaches (minutes, situations, shots, when a player is removed, etc.). So it is not a fair fight in either direction.