Also very interested in this question. If anyone has the iOS 18 or macOS Sequoia betas installed right now, please let us know what you're able to learn! I'm very curious to know which language model is powering the new transcription feature in the Notes app and the Voice Memos app (which I presume use the same ML frameworks across all Apple platforms). I'm finally seeing real-world examples of the transcription (screenshots from public beta videos attached), but I want to know what model is used and how it benchmarks against OpenAI's Whisper v3 (which consistently performs superbly).
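
For anyone who does have the beta and wants to put rough numbers on that comparison, here's a minimal sketch of how one could benchmark Apple's transcript against Whisper large-v3 on the same clip. It assumes the open-source `openai-whisper` and `jiwer` packages (plus ffmpeg for audio decoding), and the file names are just placeholders for your own recording and the transcript copied out of Notes/Voice Memos:

```python
# pip install openai-whisper jiwer  (ffmpeg must also be installed)
import whisper
import jiwer

# Load the large-v3 checkpoint (the "Whisper v3" model referenced above).
model = whisper.load_model("large-v3")

# Transcribe the same audio file that was recorded in Voice Memos.
# "voice_memo_sample.m4a" is a placeholder path.
result = model.transcribe("voice_memo_sample.m4a")
whisper_text = result["text"]

# Load the transcript Apple produced for the same clip
# (copied out of the Notes/Voice Memos UI into a text file).
with open("apple_transcript.txt") as f:
    apple_text = f.read()

# Word error rate of Apple's transcript, treating Whisper large-v3
# as the reference. A human-verified reference would be better,
# but this gives a quick relative comparison.
wer = jiwer.wer(whisper_text, apple_text)
print(f"WER vs. Whisper large-v3: {wer:.2%}")
```

Obviously a hand-corrected reference transcript would make the WER numbers more meaningful, but even this quick check would show whether the two systems are in the same ballpark.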