I can't speak as to accuracy, and don't care about speed.
But if you look at the WWDC talks, what Apple appears to care about most for this new model is "distant speech". In my experience, most of what people complain about in the context of "Siri doesn't understand me", and similar complaints about e.g. dictation or translation, involves distant speech, i.e. the speaker is a substantial distance (more than a few inches) from the mic.
If they can overcome that (and they claim they have), it opens up a whole lot of use cases that currently work very badly, for example auto-transcription of a lecture, or auto-translation of a tour guide standing a few feet away.
If anyone wants to really test these models, that's what they should be testing, because that's the problem Apple is trying to solve, not beating Whisper on speed.
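For anyone who wants to try that, a rough sketch of running a prerecorded far-field clip (say, a lecture captured from the back of the room) through the new on-device transcriber might look like the below. I'm going from the SpeechAnalyzer / SpeechTranscriber API as shown in the WWDC session, so the exact names and signatures may not match the shipping iOS 26 SDK; treat it as an outline rather than working sample code.

    import Speech
    import AVFoundation

    // Sketch: transcribe a prerecorded far-field clip with the new on-device
    // model. Names follow the SpeechAnalyzer / SpeechTranscriber API as
    // presented at WWDC and may not match the final SDK exactly.
    func transcribeRecording(at url: URL, locale: Locale) async throws -> String {
        let transcriber = SpeechTranscriber(locale: locale,
                                            transcriptionOptions: [],
                                            reportingOptions: [],
                                            attributeOptions: [])
        let analyzer = SpeechAnalyzer(modules: [transcriber])

        // Collect the transcriber's result stream concurrently while the file is analyzed.
        let collector = Task {
            var text = ""
            for try await result in transcriber.results {
                text += String(result.text.characters)
            }
            return text
        }

        let file = try AVAudioFile(forReading: url)
        if let lastSample = try await analyzer.analyzeSequence(from: file) {
            // Flush any pending results and shut the analyzer down cleanly.
            try await analyzer.finalizeAndFinish(through: lastSample)
        } else {
            await analyzer.cancelAndFinishNow()
        }
        return try await collector.value
    }

You'd also need the locale's model assets installed first (there's an AssetInventory API for that), and of course the interesting part is recording the test clip from across the room rather than with the phone held to your mouth.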
Yeah, I’ve noticed speech to text in iOS seems like it’s getting worse. I’m not sure that’s actually the case, but it’s clearly a lot worse than, for example, speech to text in ChatGPT. That said, I’m dictating this response using the iOS 26 beta on an iPad and it seems to be pretty good. One of the big problems with speech to text I’ve found recently is that it seems to completely ignore context: for example, if I say “that was a long wait”, sometimes it will transcribe “that was a long weight”…