97% accuracy?
A classifier like

String predictState() {
    return "No problems!";
}
will have 200 errors on your data, achieving 96.7% accuracy.
True. However, the MacRumors article is misleading (oversimplifying?) when it speaks of 97% accuracy. The reported figure is actually an AUC of 0.97, and accuracy and AUC (Area Under the ROC Curve) are not the same thing. The AUC measure is designed precisely to prevent situations like the one you describe, and your constant classifier would not get a good AUC.
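To make the distinction concrete, here is a small sketch (with hypothetical numbers: 6000 cases, 200 of them positive, matching the "200 errors" above) showing that a constant "No problems!" classifier gets high accuracy but only a chance-level AUC. The AUC is computed directly from its definition: the probability that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counted as 1/2.

```python
# Hypothetical data: 6000 cases, 200 of which are positives (actual problems).
y_true = [1] * 200 + [0] * 5800

# The "No problems!" classifier: same label and same score for every case.
y_pred = [0] * len(y_true)
y_score = [0.0] * len(y_true)

# Accuracy: fraction of correct hard predictions.
accuracy = sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)

# AUC: probability a random positive outranks a random negative,
# counting ties as 1/2. With constant scores every pair is a tie -> 0.5.
pos = [s for s, t in zip(y_score, y_true) if t == 1]
neg = [s for s, t in zip(y_score, y_true) if t == 0]
pairs = len(pos) * len(neg)
wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
auc = wins / pairs

print(f"accuracy = {accuracy:.3f}, AUC = {auc:.2f}")
# accuracy = 0.967, AUC = 0.50
```

So an accuracy of 96.7% and an AUC of 0.5 can describe the very same useless classifier, which is why the two numbers must not be conflated.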
That being said, it is hard to get an intuitive grasp of such measures, and without an evaluation of the actual costs and consequences of false positives and false negatives, it is hard to tell whether a diagnostic method with an AUC of 0.97 is wonderful or doing more harm than good.