I reverse-engineered their formula, and it's based straightforwardly on the scores and weights they listed, so it's not subjective. But the technical definition of the score doesn't match their goals very well, and I don't think it's a good way to evaluate ISPs. They said they wanted to emphasize reliability metrics over average performance, and I think they just screwed that up.

Is it just me, or are the differences between carriers too small to justify the difference in scores? T-Mobile wasn't so overwhelmingly dominant across categories, at least in my mind, that it deserves a score that much higher. Seems like they took objective measures and made them subjective...
The reason is that the relative differences in averages are much larger than the relative differences in reliability metrics. AT&T is ~8% more reliable than T-Mobile at delivering >25 Mbps down, but T-Mobile's average download speed is ~65% higher than AT&T's. (That's driven by T-Mobile having higher bandwidth among already-great connections, which the authors agree shouldn't count for much.) So T-Mobile doesn't get dinged very much for its less reliable >25 Mbps down but wins big on its better average, even though the average is given a bit less weight in their formula.
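To put rough numbers on it, here's a back-of-envelope sketch. The 60/40 weights and the "relative edge" framing are my guesses at the shape of a formula like this, not the article's actual numbers; only the ~8% and ~65% gaps come from their data:

```python
# Sketch of how a weighted composite lets a big average-speed gap swamp a
# small reliability gap. Weights and normalization are illustrative guesses.

# Relative advantages, approximately as cited in the article:
att_reliability_edge = 0.08   # AT&T ~8% more reliable at >25 Mbps down
tmobile_avg_edge     = 0.65   # T-Mobile ~65% higher average download speed

# Assume reliability gets somewhat more weight than average speed.
w_reliability = 0.6
w_average     = 0.4

att_score     = w_reliability * att_reliability_edge   # 0.6 * 0.08 = 0.048
tmobile_score = w_average     * tmobile_avg_edge       # 0.4 * 0.65 = 0.260

print(f"AT&T weighted reliability edge:      {att_score:.3f}")
print(f"T-Mobile weighted average-speed edge: {tmobile_score:.3f}")
# Even with less weight on the average, the huge relative gap there dominates.
```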
I think a better formula would give AT&T the win. That doesn't invalidate the whole article, which has some pretty thoughtful discussion of the details, but the headline result is probably wrong.
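For what it's worth, the kind of tweak I have in mind: give full credit for average speed only up to a "plenty fast" threshold and heavily discount anything beyond it. Everything below (threshold, weights, the input speeds and reliability fractions) is made up for illustration; only the ~8%/~65% relative gaps match the article:

```python
# Illustrative variant: diminishing returns on average download speed, so
# bandwidth beyond a "good enough" point barely moves the score.
import math

def speed_credit(mbps, good_enough=100.0):
    # Full credit up to the threshold, then only logarithmic credit beyond it.
    return min(mbps, good_enough) + math.log1p(max(mbps - good_enough, 0.0))

def composite(reliability_25mbps, avg_down_mbps,
              w_rel=0.6, w_avg=0.4, good_enough=100.0):
    # Normalize the speed credit by the threshold so both terms sit near 0..1.
    return (w_rel * reliability_25mbps
            + w_avg * speed_credit(avg_down_mbps, good_enough) / good_enough)

# Hypothetical inputs consistent with the relative gaps cited above:
att     = composite(reliability_25mbps=0.90, avg_down_mbps=120)
tmobile = composite(reliability_25mbps=0.83, avg_down_mbps=200)  # ~8% less reliable, ~65% faster
print(f"AT&T:     {att:.3f}")      # ~0.95
print(f"T-Mobile: {tmobile:.3f}")  # ~0.92
# Once the speed term saturates, the reliability edge decides it for AT&T.
```

The specific cap doesn't matter much; any diminishing-returns transform on average speed has the same effect, which is what their own stated goal of emphasizing reliability would imply.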