We'll probably be hearing more about the Siri chatbot in the coming months. Apple is aiming to unveil the functionality in iOS 27, iPadOS 27, and macOS 27, which will be previewed in June at WWDC.
Article Link: Will Apple Charge for Its Siri Chatbot?
Apple is on record as stating that AI is a core function of the device, and that charging for it makes no more sense than charging for multitouch screens. And, as a hardware manufacturer, its interest lies in improving hardware to the point where online AI services are no longer necessary because everything runs on-device.
The trend is moving away from giant frontier models toward foundation models that are fine-tuned for specific uses. There are already several on-device examples, ranging from the `com.apple.fm.language.instruct_3b` model used for most purposes to special-purpose models like `com.apple.gm.safety_deny_output` and `com.apple.gm.safety_embedding_deny`, which improve safety by checking user input and model responses.
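For a sense of how third-party apps already tap that on-device instruct model, here's a minimal sketch using Apple's FoundationModels framework (iOS 26/macOS 26); the prompt and wrapper function are my own illustration, not anything Apple ships:

```swift
import FoundationModels

// Minimal sketch: ask the on-device foundation model a question.
// Assumes a device where FoundationModels is available (iOS 26+).
func askOnDeviceModel() async throws -> String {
    // A session wraps the system language model
    // (the ~3B instruct model referenced above).
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Summarize why on-device inference preserves privacy."
    )
    return response.content
}
```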
Additionally, relying solely on a model's parametric knowledge (the main draw of huge pre-trained models) is a known path to hallucination, which has shifted the focus toward using tools to fetch current information or to perform tasks like calculations, which LLMs can't do reliably on their own.
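As a concrete illustration of that tool-use approach, here's a hedged sketch of a calculator tool built on the Tool protocol from the same FoundationModels framework; the tool's name, argument shape, and the example prompt are my assumptions, not Apple's:

```swift
import FoundationModels

// Sketch of a calculator tool so the model computes instead of guessing.
// Built on the FoundationModels Tool protocol (iOS 26+); details here
// are illustrative, not a definitive implementation.
struct CalculatorTool: Tool {
    let name = "calculate"
    let description = "Performs basic arithmetic on two numbers."

    @Generable
    struct Arguments {
        @Guide(description: "Left operand")
        let a: Double
        @Guide(description: "Right operand")
        let b: Double
        @Guide(description: "One of: add, subtract, multiply, divide")
        let operation: String
    }

    func call(arguments: Arguments) async throws -> String {
        switch arguments.operation {
        case "add":      return String(arguments.a + arguments.b)
        case "subtract": return String(arguments.a - arguments.b)
        case "multiply": return String(arguments.a * arguments.b)
        case "divide":   return String(arguments.a / arguments.b)
        default:         return "Unsupported operation"
        }
    }
}

// The session hands the tool to the model, which invokes it as needed.
func calculate() async throws -> String {
    let session = LanguageModelSession(tools: [CalculatorTool()])
    return try await session.respond(to: "What is 37.5 * 14.2?").content
}
```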
Based on the state of the art in the open-source LLM world, it's quite likely that an optional "chatbot Siri" app -- one that appears as a foreground app to interact with, as opposed to the actual Siri assistant -- could be implemented using the on-device models and Private Cloud Compute (PCC), along with the tools provided by the apps on the device and the Internet, and that this would be more than sufficient for the majority of users. And for tasks that require huge context windows or immense amounts of compute, it's entirely conceivable that Gemini could replace ChatGPT as the optional extension.
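As a rough sketch of how such an app might route requests, FoundationModels already exposes an availability check on the system model; the cloud fallback below (`sendToPrivateCloudCompute`) is a hypothetical stand-in, since Apple hasn't published a public client API for PCC:

```swift
import FoundationModels

// Hypothetical placeholder: Apple exposes no public PCC client API, so
// this stands in for whatever server-side path such an app would use.
func sendToPrivateCloudCompute(_ prompt: String) async throws -> String {
    fatalError("Illustrative stub only")
}

// Route a request: prefer the on-device model, fall back to the cloud
// when the model is unavailable on this hardware or for this user.
func respond(to prompt: String) async throws -> String {
    let model = SystemLanguageModel.default
    switch model.availability {
    case .available:
        let session = LanguageModelSession(model: model)
        return try await session.respond(to: prompt).content
    case .unavailable(let reason):
        print("On-device model unavailable: \(reason)")
        return try await sendToPrivateCloudCompute(prompt)
    }
}
```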