Siri still has its drawbacks and doesn’t sound the best, but I use it constantly: to run scenes in my home, play music, start workouts, start navigation, and make phone calls. Everyone wants an “LLM Siri,” and sure, that would feel like a huge leap: more natural, conversational, and back-and-forth. But honestly, I don’t think that’s the real challenge Apple is dealing with.
What Apple announced a year and a half ago still goes far beyond what any other OS or assistant is doing today. Google only just made similar announcements, and Alexa Plus is getting pretty mixed reviews. I think Apple’s focus is on solving the harder problem, making Siri genuinely useful across your apps and devices, not just giving you LLM-style answers. The real challenge is integrating those two layers, as Craig Federighi mentioned.
When you think about it, what Apple described back then was essentially an agentic AI, one that can understand app UIs, navigate them, and carry out complex tasks on your behalf. At the time, nothing like that existed. Even now, there’s still no equivalent on Android. Gemini can handle some basic actions, but not at the depth Apple originally outlined. So it makes sense that it’s taking them time to make it work. But in reality, Apple isn’t “behind” — no one else has delivered those features either.
And if you’ve tried Workout Buddy on the Apple Watch, I think that’s a glimpse of what’s coming. It sounds incredibly natural (ChatGPT-level voice quality) and generates context-aware feedback in real time based on your pace and workout history. That shows Apple is more than capable of building an LLM-level system.
Interested in hearing everyone's thoughts!