I think it’s important to remember that Apple has a much higher quality bar to meet with macOS/iOS AI integration.
When ChatGPT hallucinates a response about putting glue on pizza, people laugh at the silly AI and quickly forget about it. There is an acceptance that LLMs hallucinate sometimes. It is considered harmless because it happens in a chat window that has to be explicitly used; it is not seen as personal, and it is accepted as experimental.
Using a potentially hallucinating AI to manage your calendar, contacts, email, and other personal items is a disaster waiting to happen. Nobody wants a hallucinating AI touching their personal stuff (a lot like nobody wanted mandatory U2 in their music library). A clear example of this problem: the paroxysms of public outrage when Apple’s AI made some mistakes with news summaries. Nobody expresses such outrage when ChatGPT makes chat summarization mistakes. Most people don’t even notice.
Apple is being subjected to a double standard with regard to AI, not because of animosity toward Apple, but because Apple needs to apply AI to everyone’s personal items, where people’s tolerance for errors is practically zero.