Crow wrote:I'll have to refresh my knowledge of PIE but my memory is that it is weak sauce, worse than the other better known weak box score metrics.
Crow wrote:Yep, weak. PIE may be an improvement over "NBA efficiency," but that was one of the simplest, weakest metrics out there for evaluating players. Correlation with team results is another, very different thing. So it depends what you are trying to do. But with its simplistic weights and major omissions I doubt that it is very good or even competitive at that either.
That's what I would've thought, too, but I was surprised by how good the site's handicapping track record was.
Our college basketball model, the CBB Lockatron, formulated projections for just under 2,000 college basketball games during the 2016-2017 season. It was the model's first season, and with almost 2,000 projections in the books, there's a lot of encouraging data and profit to look back on.
When the model's projection differed from the spread by at least 5 points, the model went 223-180 (55%) against the spread.
The model was most profitable when identifying value in the underdog. When the model liked the underdog to cover and the projection differed from the spread by at least 4.9 points, it went 149-103 (59%). The model also tracked public betting percentages. If you "faded the public" and only took the aforementioned plays (underdogs at 4.9+) that 40% or less of the public was on, the model's record was 62-38 (62%). At the very top segment of projections (the ones differing from the spread the most), the model was spot-on with underdogs. At a difference of 6.7 or greater, it went 68-45 (60%) with underdogs. Again, fading the public (40% or less) improved this record to 33-16 (67%).
When the model found significantly more value in the favorite's spread (5.2 difference or greater), it had moderate success, as long as it wasn't a play the public was absolutely hammering (85% action or less on the favorite). Its record was 73-62 (54%).
Not many CBB models can claim ATS (against the spread) win rates of 59%+ at their highest-value points. We're excited for the 2017-2018 CBB season!
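For context on why records like these are profitable: assuming the standard -110 pricing on point-spread bets (risk 110 to win 100), a bettor needs to win roughly 52.4% of plays just to break even, so a 55% ATS rate clears that bar. A minimal sketch of the arithmetic (the flat -110 price on every play is an assumption, not something the site states):

```python
# Break-even math behind ATS win rates, assuming every play is priced
# at the standard -110 (risk 110 units to win 100).

def breakeven_rate(price: int = -110) -> float:
    """Win rate needed to break even at a given American price."""
    risk = abs(price)
    return risk / (risk + 100)

def roi(wins: int, losses: int, price: int = -110) -> float:
    """Return on total amount risked for a W-L record at a given price."""
    payout = 100 / abs(price)          # profit per unit risked on a win
    profit = wins * payout - losses    # each loss costs one unit
    return profit / (wins + losses)

print(f"break-even win rate: {breakeven_rate():.4f}")  # ~0.5238
print(f"223-180 record ROI:  {roi(223, 180):.2%}")     # the 5-point-edge record above
```

So even the model's broadest filter (223-180, 55%) would have returned about 5-6% on the total amount risked under these assumptions; the tighter underdog filters return considerably more.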
...
The NBA Lockatron formulated projections for just under 500 games during the 2016-2017 season. When projections differed enough from the spread to meet the optimal criteria for underdogs or favorites, the model spit out plays at a 65% win rate (50-27), and went on a 26-11 run against the spread from 03/10/17 through 04/19/17 before finally slowing down in the playoffs.
The model was revamped during the off-season and is back for the 2017-2018 season with a vengeance. In the first couple weeks of NBA action, the model has already started 9-1 on picks at a 6.6+ value.
Perhaps similar or superior results could be had by pairing better metrics with the same betting method.
Sorry if this isn't appropriate content for this thread. I just figured some people might be interested, since aspects of the methodology could perhaps be applied to win total predictions, and I was surprised by their use of PIE, which I would have thought was a poor choice.
I am particularly interested in this quote:
A high PIE % is highly correlated to winning. In fact, a team's PIE rating and a team's winning percentage correlate at an R square of .908.
I don't know enough about statistics to evaluate this claim. Are there similar R-squared evaluations against win % for other all-in-one player metrics?
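For anyone curious how a number like that is produced: you take each team's season-level metric and its winning percentage, compute the Pearson correlation across teams, and square it. A minimal sketch with made-up numbers (the data below is purely illustrative, not real team PIE values):

```python
import numpy as np

# Hypothetical per-team season data: a composite metric (e.g. team PIE %)
# and winning percentage. These values are invented for illustration.
team_metric = np.array([0.54, 0.47, 0.56, 0.44, 0.50, 0.52, 0.46, 0.51])
win_pct     = np.array([0.65, 0.37, 0.73, 0.29, 0.50, 0.61, 0.40, 0.54])

# Pearson r across teams, then square it to get R^2.
r = np.corrcoef(team_metric, win_pct)[0, 1]
r_squared = r ** 2
print(f"R^2 = {r_squared:.3f}")
```

An R² of .908 would mean roughly 91% of the variance in team winning percentage is "explained" by team PIE in that season's sample. Note that team-level correlation with winning is a low bar for box-score composites, since most metrics built from points, rebounds, assists, etc. correlate strongly with team success at the team level even when they rank individual players poorly.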