Yes, Siri seems to have been a dead end that turned out to be nearly impossible to modify and that Apple didn't bother replacing for far too long. It sounds like that is going to change.

Apple's "AI" path has been terrible for many years. I remember concluding (and maybe posting here or elsewhere) that Apple's single biggest vulnerability is its reliance on Siri to expand its UI to take advantage of new kinds of interface processing. Siri was a band-aid, a stopgap that was probably a mistake to begin with; relying on it for more than a year or two, when they should have been working on a parallel replacement path from the start, was and is a huge barrier for them. Siri is terrible in all of its implementations: on the iPhone, the HomePod, and CarPlay. I basically never use it, but I always wish I could.

I just bought a new Sony TV, which has Google TV built in, and I bought an Apple TV 4K with it. I told my wife I plan to return the Apple TV as I move toward Google for my home "AI" setup. Sad, because I do not trust Google at all with my privacy, but it is years ahead of Apple in ease of use for its "smart" technology.

I still do not understand why Apple didn't see this 10 years ago, and still doesn't seem to see it now. Maybe it's "emperor has no clothes" syndrome, or like the guy who gives a weird, stupid answer on Family Feud and the family claps and says, "yeah, great answer, great answer." Apple's C-suite is saying, "Siri is our future, we can make it better, it will catch up, just give it more resources..." Dumb, and sad.
Siri is not the only thing Apple has been doing with AI. They have been doing what they call machine learning for years: functional AI focused on specific features. Last year they experimented with typing suggestions, using a transformer model to learn your typing patterns. A transformer is the same architecture an LLM is built on, just at a much smaller scale.
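To give a feel for what "learning your typing patterns" means at the smallest possible scale, here's a toy Swift sketch. This is my own illustration, not anything from Apple: their keyboard model is a real transformer, while this stand-in is just a word-pair frequency table doing the same job (suggest the next word from what you've typed before).

```swift
// Toy stand-in for a learned typing model: count which word tends to
// follow which, then suggest the most frequent continuation.
var bigramCounts: [String: [String: Int]] = [:]

func learn(from text: String) {
    let words = text.lowercased().split(separator: " ").map(String.init)
    for (prev, next) in zip(words, words.dropFirst()) {
        bigramCounts[prev, default: [:]][next, default: 0] += 1
    }
}

func suggestNext(after word: String) -> String? {
    // Return the word most often seen after `word` in the learned text.
    bigramCounts[word.lowercased()]?.max { $0.value < $1.value }?.key
}

learn(from: "see you soon see you tomorrow see you soon")
print(suggestNext(after: "you") ?? "no suggestion")  // prints "soon"
```

A transformer does this same next-token job, but with context windows and learned attention instead of raw counts, which is why it can generalize to phrases it has never seen.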
I imagine an arrangement where Siri has a local LLM that listens to your requests (or takes them via typing) and interprets the intent, even asking clarifying questions. Once it knows what you want, it would check the available functions in the OS, in apps, or on the web and bring those together to satisfy the request. They could expand on the App Intents framework, which already exposes app functionality to Shortcuts, to provide a toolbox for the local AI; a rough sketch of that idea is below. A lot of apps will have their own machine-learning-style AI functions as well, which Siri could call. For some things it will make more sense to pass the request on to a server-based AI like Gemini, OpenAI, or Baidu. This whole idea of a local AI that connects to other systems for different functions is similar to the Large Action Model envisioned by those crazy kids who made the Rabbit R1.
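To make the "toolbox" idea concrete, here's a hedged Swift sketch using the real App Intents framework that Shortcuts consumes today. The intent itself is the kind of thing apps already ship; the part where a local Siri LLM discovers and invokes it as a tool is pure speculation on my part, and SetThermostatIntent is a made-up example, not a real API.

```swift
import AppIntents

// An App Intent of the kind apps already expose to Shortcuts.
// The speculation: a local Siri LLM could treat every such intent
// as a callable "tool" once it has interpreted what you want.
struct SetThermostatIntent: AppIntent {
    static var title: LocalizedStringResource = "Set Thermostat"

    @Parameter(title: "Temperature (°F)")
    var temperature: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Real app-specific work (e.g. a HomeKit call) would go here.
        return .result(dialog: "Thermostat set to \(temperature)°F.")
    }
}
```

Because intents declare their titles and typed parameters up front, a local model could in principle match "make it 72 in here" to this tool and fill in the parameter itself, and only fall back to a server-based AI when no local tool fits.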
I’m really interested in where this could go.