Maybe they aren't confident that their on-device LLM is efficient enough to perform without slowing down the device, so by making it optional, the device at least functions and delivers on its primary objectives. Once the models are optimised after a few iterations, people will enable the feature on their own.
Apple's LLMs seem to be quite good at running locally. I think it's more about the breadth of requests they can handle.
Requests like these are quite likely easy for a local Apple LLM to support:
* Turn off my light
* Provide direction to the nearest gas station
* Play some Christmas songs
Here are some questions/requests I put to ChatGPT:
* Where to stay in Porto center for vacation?
* What is Aristotle's view on a happy life?
* Can you provide more details on Eudaimonia as the Highest Good?
* Which battles in WWII had the biggest losses of men?
* f(x) = 1/(x^3+1). Integrate f(x)
* Can an Audi A3 convertible run on E10 petrol?
* Summarize Kuhn's "The Structure of Scientific Revolutions"
* Has a dog played in the NHL?
* What's the definition of a tourist trap?
* I'm going to Norway for a week this summer. What's the best way to see some fjords?
It's with these kinds of requests that I believe Apple might have hit a wall, and where services from OpenAI and Google are better.
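As an aside, the integral in that list is a good litmus test because it has a non-obvious closed form via partial fractions. A minimal sketch (my own derivation, not any LLM's output) that spot-checks the standard antiderivative numerically using only the Python standard library:

```python
import math

def f(x):
    """The integrand f(x) = 1/(x^3 + 1)."""
    return 1.0 / (x**3 + 1.0)

def F(x):
    """Closed-form antiderivative of f via partial fractions, valid for x > -1:
    ln|x+1|/3 - ln(x^2 - x + 1)/6 + arctan((2x-1)/sqrt(3))/sqrt(3) + C
    """
    return (math.log(abs(x + 1.0)) / 3.0
            - math.log(x*x - x + 1.0) / 6.0
            + math.atan((2.0*x - 1.0) / math.sqrt(3.0)) / math.sqrt(3.0))

# Spot-check: the central-difference derivative of F should match f.
h = 1e-6
for x in (0.0, 0.5, 2.0, 10.0):
    approx = (F(x + h) - F(x - h)) / (2.0 * h)
    assert abs(approx - f(x)) < 1e-6
```

A local model that can only pattern-match simple commands is unlikely to produce (or verify) a result like this, which is exactly the breadth gap being described.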