Crow wrote: Anybody have thoughts about this study?

That article instantly got on my bad side by focusing on player rankings, and even worse, calculating the standard deviation of the ranks. Bad statistical methodology. A better way of looking at how similar or dissimilar the player rating systems are to each other is to look at correlations, or possibly normalized measures (z-scores). Rankings are bad because two adjacently ranked players could be really close to each other (maybe Kobe averages 27.9 points per game and LeBron averages 27.8), or far apart (maybe the numbers are 27.9 and 24.9). Either way, it's much more informative, and for almost all purposes more accurate, to use the actual numbers (27.9, 27.8, 24.9, etc.) instead of the cruder, less informative ranks (#1, #2, #3).
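To make the point concrete, here's a minimal sketch of why ranks throw away information. The points-per-game figures are the hypothetical ones from the post (the third player's name is made up to fill out the example); z-scores preserve the near-tie at the top, while ranks flatten it to "one rank apart":

```python
# Hypothetical PPG figures from the post; "Durant" is an invented third name.
from statistics import mean, pstdev

ppg = {"Kobe": 27.9, "LeBron": 27.8, "Durant": 24.9}

mu = mean(ppg.values())
sigma = pstdev(ppg.values())
z = {name: (x - mu) / sigma for name, x in ppg.items()}

# Ranks collapse a 0.1-point gap and a 2.9-point gap into the same
# "one rank apart" distance.
ranks = {name: i + 1
         for i, (name, _) in enumerate(sorted(ppg.items(), key=lambda kv: -kv[1]))}

print(ranks)  # {'Kobe': 1, 'LeBron': 2, 'Durant': 3}
print(z)      # Kobe and LeBron come out nearly tied; Durant is well behind
```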
When we're evaluating players, except for specialized purposes such as handing out All-NBA awards and the like, ranks are not what we should focus on; instead we should focus on measures of player quality/ability. I.e., look at Win Shares, PER, WP48, etc., rather than the rankings those measures produce.
If one truly does have to analyze data which are ordinal in nature, such as players' ranks, there are specialized rank-based statistics that should be used (e.g. Spearman's rank correlation or Kendall's tau), not the standard deviation.
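As a sketch of both points, here's a comparison of two rating systems done the suggested way. The ratings below are invented for illustration (not real Win Shares or PER values); Pearson correlation on the raw ratings uses all the information, and Spearman's rank correlation, the standard rank-based choice, is just Pearson applied to the ranks:

```python
# Made-up ratings for five players under two hypothetical rating systems.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def to_ranks(xs):
    """Rank 1 = highest value; no tie handling, fine for this sketch."""
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

system_a = [10.2, 9.8, 7.1, 6.9, 3.0]      # e.g. Win Shares (made up)
system_b = [24.5, 23.9, 18.0, 19.1, 11.2]  # e.g. PER (made up)

# Correlation on the raw ratings...
print(round(pearson(system_a, system_b), 3))
# ...vs. Spearman (= Pearson on the ranks), if ordinal analysis is truly required.
print(round(pearson(to_ranks(system_a), to_ranks(system_b)), 3))
```

Note that the two systems here agree closely on the raw numbers but swap two mid-list players, so the rank-based figure comes out lower; the ranks see only the swap, not how small the underlying gap was.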
Thumbs down on the article; I didn't bother to look at the results or conclusions in detail because I wouldn't believe them anyway. He evidently had nice data on Win Shares, etc., and instead of comparing them in an interesting way, he frittered them away on the silly rankings exercise.