I already posted a table above that shows the math is wacky. According to JD Power, the survey:
"measures satisfaction across five key factors (in order of importance): performance (26%); ease of operation (22%); styling and design (19%); features (17%); and cost (16%)."
So their methodology ranks cost last (the one category in which Samsung beats Apple), yet Samsung's overall score is better. How is that possible? I could maybe understand it if they had ranked cost as most important, but they didn't. If the overall score isn't based on the 5 factors, what is it based on?
The problem stems from the fact that whoever created that table oversimplified AND incorrectly tallied how JD Power arrived at their results. The math is wacky. Fact. That is not JD Power's math. Also fact. The creator of that chart assumed it was a straight 'add it up and divide by 5' average.
The original Power Circle chart that the8thark and others posted clearly shows why your chart and its wacky math are invalid.
"*Please note that JDPower.com Ratings may not include all information used to determine J.D. Power awards."
If we don't know all the information, we can't say JD Power's math is wacky. It might turn out to be wacky, but that can't be determined from the chart you presented.
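To make that concrete, here's a minimal Python sketch of how a weighted index can disagree with a naive 'add up the circles and divide by 5' average. Only the factor weights come from the quoted methodology; every score below is hypothetical, since JD Power publishes the coarse 1-5 circle ratings, not the underlying index scores.

```python
# Sketch: weighted index vs. naive circle average.
# Weights are from the quoted JD Power methodology; all scores are
# made-up illustrations, NOT actual JD Power data.

FACTORS = ["performance", "ease_of_operation", "styling_and_design",
           "features", "cost"]
WEIGHTS = [0.26, 0.22, 0.19, 0.17, 0.16]

# Hypothetical underlying scores (0-10 scale): Apple narrowly ahead
# in four factors, far behind on cost.
underlying = {
    "Apple":   [8.5, 8.4, 8.6, 8.5, 4.0],
    "Samsung": [8.3, 8.2, 8.4, 8.3, 9.5],
}

# Hypothetical published circle ratings (1-5) for the same data --
# the coarse view a reader of the chart actually sees.
circles = {
    "Apple":   [5, 5, 5, 5, 2],
    "Samsung": [4, 4, 4, 4, 5],
}

for brand in underlying:
    # Weighted sum over the five factors (JD Power-style index).
    weighted = sum(w * s for w, s in zip(WEIGHTS, underlying[brand]))
    # Straight 'add it up and divide by 5' over the circle ratings.
    naive = sum(circles[brand]) / len(circles[brand])
    print(f"{brand}: weighted index = {weighted:.2f}, "
          f"naive circle average = {naive:.2f}")

# Output:
# Apple: weighted index = 7.78, naive circle average = 4.40
# Samsung: weighted index = 8.49, naive circle average = 4.20
```

With these made-up numbers, the naive circle average says Apple wins while the weighted index says Samsung wins, even though cost carries the lowest weight: a big margin in one factor can outweigh narrow losses in the other four, and the rounded circles hide those margins entirely. That's exactly why you can't recompute (or debunk) JD Power's overall score from the chart alone.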