Read it again -- she downplayed her knowledge of audio gear, but then spoke of growing up with musician parents and being around rehearsals and performances for much of her life.
I saw all that. Musician parents are not necessarily talking tweeters, tweeter arrays & mids. But I'll grant that such terms aren't the deep, hardly-anyone-could-know minutiae that sometimes fly around in speaker spec debates. That's partly why I don't completely dismiss what's there- I'm just pointing out apparent bias, a good reason to be biased, and that the favorable conclusion seemed predetermined.
She's a reporter. But she didn't report WHERE the tests were done. Who was running the testing? Etc. A lot of phrases sounded PR-ish to me, like maybe whoever was running the tests was emphasizing the positives. Were the test presenters/hosts maybe Apple? Probably, right? If the testing host is biased, a reporter may pick up lines & phrases that the host rolls out in comparisons. If they "know next to nothing about audio hardware", maybe they copy down such phrases to give their article more punch?
If the host was Apple, would Apple zero in on its own product's shortcomings (if any)? Of course not. Would Apple choose audio test files most flattering to its product and least flattering to the other products? Of course. Would those files maybe be optimized to sound best on Apple's own product and worse on the competitors? Of course. And so on.
Of course, if the praise for one choice sounds overly gushing, maybe someone will question the credibility. So here are a few shortcomings to pin to the favorite that are going to be addressed this year. No deal killers- just some smallish issues with assurances (by a reporter!!!) that they will be addressed in a future software upgrade. A few cons can make the "whole" seem more credible, right? If the cons are smallish and fixed soon, they won't turn many readers away from the favorite.
It may be that this test was truly unbiased & objective, hosted by someone with no desire to show favoritism to any of these speakers. Perhaps each reporter got to "throw" their own choice of music file(s) into the pool, with no tweaks or optimization, and the audio files played were exactly the same quality- not lossless here and 64kbps there. Etc.
Again, see my TV-selling reference. Tweak the settings on the set you want to sell to look its best, and detune the settings on the sets you don't want to sell. Feed the sets you want to sell the best video and feed the other sets a less attractive copy of the same. Etc. People come in, "shop head-to-head", and buy the set you want them to buy. It appeared to be the best. Mission accomplished. Max commission earned.
When the head-to-head is fully OUTSIDE of Apple's control, we'll read objective reviews. The reviewer will set the environment. They'll pick the audio files to be tested- probably a common group of files they use for testing many other small speakers. All such "razzle dazzle" possibilities will be mitigated. It won't be obvious from the start which speaker is the favorite. In fact, they'll work at least as hard to identify tangible cons as they do tangible pros. The conclusion might end up about the same, or it might shift the crown to one of the others. TBD.