
Re: Adjusted Player Pairs

Posted: Sun May 29, 2011 10:55 pm
by J.E.
multi year (2008-2011). No playoffs. Offensive lambda = 4000

http://stats-for-the-nba.appspot.com/offensive_pairs

Nash shows up at the top quite a lot of times. Nash and Vince Carter are one of the top underperforming duos though.
Eddie House and Ray Allen surprised me at 4
Chris Paul and Stojakovic are head and shoulders above everyone else; too bad they broke up (although Stojakovic probably has no reason to complain)
Kevin Martin and Chuck Hayes at #23 is interesting
Brandon Bass and Jason Kidd seemed to work well together
Amare Stoudemire + Lou Amundson, Felton + Toney Douglas, Mo Williams + Shaq, Pierce + Sheed, Prince + Villanueva, Joe Johnson + Hinrich, Jet + Caron Butler, Harden + Ibaka were horrible duos.

Re: Adjusted Player Pairs

Posted: Mon May 30, 2011 12:15 am
by Crow
Thanks very much for all the new data.

I'll look at the multi-season stuff and may comment on it later after I spend time with it.

On the defensive stuff, I'll just briefly say that among the top pairs with a +1.5 rating or better and a rating improvement of 1,000 places or more over the sum of the individuals, interior and perimeter players appear to be represented roughly in proportion to their court time (close to 40% interior and 60% perimeter). But among the very worst pairs (worse than -1.5, with a drop of 1,000+ spots for the pair's ranking vs. the sum of the individuals), perimeter players are moderately overrepresented and interior players moderately underrepresented compared to their assumed share of court time.
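The tally described above can be sketched as a simple filter-and-count. This is only an illustration of the procedure, not Crow's actual code; the field names (`rating`, `rank_improvement`, `positions`) are assumed.

```python
def tally_positions(pairs, rating_cut=1.5, rank_jump=1000):
    """Count interior vs perimeter slots among pairs that clear both
    the rating cutoff and the rank-improvement cutoff.

    pairs: list of dicts with keys 'rating', 'rank_improvement',
           and 'positions' (the two players' interior/perimeter labels).
    """
    interior = perimeter = 0
    for p in pairs:
        if p["rating"] >= rating_cut and p["rank_improvement"] >= rank_jump:
            for pos in p["positions"]:
                if pos == "interior":
                    interior += 1
                else:
                    perimeter += 1
    return interior, perimeter
```

The resulting counts can then be compared against the assumed court-time split (roughly 40% interior, 60% perimeter) to judge over- or underrepresentation.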

Re: Adjusted Player Pairs

Posted: Mon May 30, 2011 7:00 pm
by Crow
On the multi-season offensive pairs, I looked briefly at the Lakers.

The five pairs that were at least +0.5 together and improved the most in pair rank over the sum of the individuals, from best downward, were:

Fisher + Brown
Gasol + Brown
Fisher + Bynum
Odom + Gasol
Bryant + Odom


The four pairs that were at least -0.5 together and declined the most in pair rank relative to the sum of the individuals, from the very worst upward, were:

Artest + Odom
Bynum + Blake
Odom + Brown
Artest + Brown

How much this means would be a topic for further review, but that is what this metric estimates.



There is a nearly endless range of ways to work with this wider set of Adjusted +/- data more systematically and comprehensively, looking for potentially useful clues while staying mindful of the data's imprecision.

Re: Adjusted Player Pairs

Posted: Sat Jun 04, 2011 7:36 am
by Crow
For the 4 conference finalists:

Offensive pairs better than +1.5

Bulls 0
Heat 3
Mavs 2
Thunder 3

Defensive pairs better than +1.5 for them

Bulls 6
Heat 3
Mavs 6
Thunder 6

That's across all the pairs, looking only at these few estimated strongest ones. Is this worth something, or just noise? That's up to the reviewer, should they see this data or more.

Re: Adjusted Player Pairs

Posted: Wed Mar 26, 2014 12:45 pm
by J.E.
Revisiting an old thread with some new results which are, unfortunately, a little disappointing.

Back then I wrote
J.E. wrote:I think the (extremely obvious) next step would be to use (RatingP1+RatingP2)/4, with RatingPx derived from standard RAPM, as a prior for every pair. If any lambda other than infinity turns out to be optimal it means we improved on test set performance. (Because lambda == infinity would lead to the exact same test-set-prediction-numbers as standard RAPM)
I finally got around to testing this out. Lambda indeed turned out to be infinity (i.e. extremely high), strongly hinting at the fact that we don't gain anything from running any kind of pair-RAPM analysis. Essentially, this means whenever you want to predict the performance of a pair you can simply add up the RAPM of the two players.

Here's the test I ran in a little more detail: split the data into two halves, use one half to compute player ratings, then compute the out-of-sample (OOS) error on the other half. The methods I tested were:

1. standard RAPM
2. pair RAPM, no priors
3. pair RAPM with (RatingP1+RatingP2)/4 as a prior for each pair (see quote)
4. pair RAPM with (RatingP1+RatingP2)/4*factor (with factor < 1) as a prior for each pair
5. average of the forecasts from methods 1 and 2

Now, 2. through 5. don't perform worse than 1., but they don't perform better either. Given that pair analysis makes things more complicated (more variables) and we don't gain anything from it, it's probably best not to do it.
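The prior mechanism in methods 3 and 4 amounts to ridge regression that shrinks each pair coefficient toward its prior instead of toward zero. A minimal sketch of that idea (not J.E.'s actual code; the closed form assumes a design matrix small enough to solve directly):

```python
import numpy as np

def rapm_with_prior(X, y, prior, lam):
    """Ridge regression shrinking coefficients toward `prior` rather than 0:
        minimize ||y - X b||^2 + lam * ||b - prior||^2.
    Substituting b = prior + d turns this into standard ridge on the
    residual y - X @ prior, which is solved in closed form below."""
    resid = y - X @ prior
    d = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ resid)
    return prior + d
```

With lam -> infinity the estimate collapses onto the prior exactly, which is why an optimal lambda of infinity in the test above means the pair terms add nothing beyond the summed individual RAPMs.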

If anybody has more ideas on how to design an analysis/regression that could potentially find that pairs *do* perform better/worse than the sum of their parts, please tell me.


I think one potential reason why we don't see such an effect is that coaches already have a very good grasp of which players work together and which don't - e.g. they don't put 5 centers on the floor at the same time (if they did, pair/triplet/etc. analysis would very likely be more helpful in OOS prediction)

Also, I can't say that I'm very surprised. When I'm glancing over the raw pair data on my website (like here, here and here) I rarely see a highly ranked pair where I say to myself 'it absolutely makes sense that these two work well* together because ..'
*better than the sum of their parts

Also, it seems, when looking at a team's top 15-20 pairs, most players tend to be part of either predominantly 'bad' pairs or predominantly 'good' pairs. It rarely happens that a player is part of some very good pairs and, at the same time, some very bad pairs. If the latter were more common, we'd, again, probably see better OOS prediction with pair analysis.
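That consistency is easy to check from a table of pair ratings: for each player, look at what fraction of their pairs rate positive. Values near 0 or 1 mean the player's pairs are uniformly 'bad' or 'good'; values near 0.5 would indicate the mixed case that might make pair analysis useful. A sketch under an assumed input format (a dict mapping player-name pairs to ratings):

```python
from collections import defaultdict

def pair_sign_consistency(pair_ratings):
    """pair_ratings: {('PlayerA', 'PlayerB'): rating, ...}
    Returns, per player, the fraction of their pairs with a positive rating."""
    by_player = defaultdict(list)
    for (a, b), r in pair_ratings.items():
        by_player[a].append(r)
        by_player[b].append(r)
    return {p: sum(r > 0 for r in rs) / len(rs) for p, rs in by_player.items()}
```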

Re: Adjusted Player Pairs

Posted: Wed Mar 26, 2014 3:05 pm
by xkonk
Unless I'm misunderstanding the set-up of your analysis, you basically have a regression where for methods 1 and 2 you have predictors of 'player 1' and 'player 2', and in method 3 you have predictors of 'player 1', 'player 2', and '(player1+player2)/4', right? As far as I can tell, that third term is redundant with the first two. Is there any reason to assume that the regression would have 'chosen' to put weight on the third or summation term instead of the first two? If you're basically looking for an interaction effect, wouldn't you enter the interaction of player1*player2 instead of adding them?

Re: Adjusted Player Pairs

Posted: Wed Mar 26, 2014 3:35 pm
by J.E.
xkonk wrote:Unless I'm misunderstanding the set-up of your analysis, you basically have a regression where for methods 1 and 2 you have predictors of 'player 1' and 'player 2', and in method 3 you have predictors of 'player 1', 'player 2', and '(player1+player2)/4', right?
(1) has completely separate predictors for 'player1' and 'player2'. (2) has predictors for 'pair_p1_p2' only. In (3) I also have predictors for 'pair_p1_p2' only, but the ratings of 'player1' and 'player2', taken from (1), serve as a prior for that pair.

To give an example
(1) outputs simple RAPM player ratings like these. Iguodala has a +6.3, Curry +4.3, Lee +0.2
(2) doesn't even have 'Iguodala' as a variable, but it has 'Iguodala+Curry', 'Curry+Lee' and 'Iguodala+Lee' as variables. The obvious problem here is that a pair that has rarely played together will get a rating close to 0, even if the pair is 'Iguodala+Curry'
(3) doesn't have 'Iguodala' as a variable either, but it has 'Iguodala+Curry' as a variable; plus, it 'knows' that Iguodala and Curry have a positive RAPM and assumes the pair to have a rating that is close to +6.3+4.3 (that's the prior for that pair)
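The variable setup in (2) and (3) can be sketched by building one dummy column per unordered pair that appears on the floor together. This is a toy, offense-only illustration; a real pair-RAPM design matrix would also encode the defensive pairs (typically with -1) and weight by possessions.

```python
import numpy as np
from itertools import combinations

def pair_design_matrix(stints):
    """stints: list of lists, each naming the players on the floor for a stint.
    Returns (X, pairs): X has one row per stint and one dummy column per
    unordered pair of players that ever shared the floor."""
    pairs = sorted({p for s in stints for p in combinations(sorted(s), 2)})
    col = {p: j for j, p in enumerate(pairs)}
    X = np.zeros((len(stints), len(pairs)))
    for i, s in enumerate(stints):
        for p in combinations(sorted(s), 2):
            X[i, col[p]] = 1.0
    return X, pairs
```

With three players on the floor this produces three pair columns ('Curry+Iguodala', 'Curry+Lee', 'Iguodala+Lee'), matching the example above: the individual players never appear as variables.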

Re: Adjusted Player Pairs

Posted: Thu Mar 27, 2014 4:56 am
by xkonk
I see. So the new models essentially only have the interactions and no main effects, and there's no improvement even if you give the pairs a prior. That is kind of disheartening. Priors seem to help standard RAPM quite a bit; any guess as to why it doesn't for the pairs?

Re: Adjusted Player Pairs

Posted: Thu Mar 27, 2014 8:52 am
by J.E.
xkonk wrote:Priors seem to help standard RAPM quite a bit; any guess as to why it doesn't for the pairs?
The priors I create for standard RAPM come from the BoxScore, so a different dataset, if you will. The priors I create here come from the same lineup data that gets used when computing the final pair ratings - I'm basically using the same data twice. I think it's understandable that priors from a different dataset are more helpful

Tried it with interaction and main effects this time, so each player was a variable and each pair was a variable (all dummies), and I tried different lambda values for the different columns (depending on whether it's a 'pair' column or a 'player' column) - still no improvement
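Using different lambdas for 'player' columns vs. 'pair' columns amounts to replacing the scalar ridge penalty with a diagonal penalty matrix. A minimal sketch of that variant (again not the actual code, and again using a dense closed-form solve for illustration):

```python
import numpy as np

def ridge_per_column(X, y, lam_vec):
    """Ridge regression with a separate penalty per column:
        minimize ||y - X b||^2 + sum_j lam_vec[j] * b[j]^2.
    E.g. one lambda shared by all 'player' dummy columns and a different,
    typically larger one shared by all 'pair' dummy columns."""
    return np.linalg.solve(X.T @ X + np.diag(lam_vec), X.T @ y)
```

Driving the pair-column lambdas toward infinity recovers standard player-only RAPM, which is the 'no improvement' end state reported above.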

Re: Adjusted Player Pairs

Posted: Thu Mar 27, 2014 10:10 pm
by xkonk
Ah. I kind of assumed you used the previous year's standard RAPM (e.g. Iggy 2012 + Curry 2012) as the prior for this year's pair (Iggy + Curry 2013). Using the previous year's RAPM as a prior helps as well, right? (That's what I meant, more than the boxscore version.)