And one more thing: Apple wants its own LLM to run entirely on device, not in the cloud. The most advanced AI features that ChatGPT and Google currently handle in the cloud, Apple wants to handle on device once everything runs through its own LLM.

Apple's decision to partner with another company to deliver advanced AI features reportedly came after it hit a wall with its own artificial intelligence work. Apple is known to have been working on its own large language model (LLM), the kind of algorithm that underpins generative AI, and found it could power basic features like voice memo transcription, photo editing, new Safari search capabilities, and auto-replies in Messages. However, Apple is said to have realized "early on" that competitors like Google and OpenAI were "far ahead in chatbots and on-the-fly assistance."
Unfortunately, even though the M4, M3, and A17 Pro chips have very advanced Neural Engines capable of running a good chunk of advanced AI on device, by Apple's standards they're still not powerful enough to do on device what cloud-based AI like ChatGPT and Gemini can do. It likely won't be until the A18 or A19, or the M6 or M7, that Apple has a Neural Engine powerful enough to run a full LLM and advanced AI models entirely on device rather than in the cloud.
(Yes, true to form, Apple won't release a feature that's already on other phones until it has perfected it in its own way, and most of the time, Apple does perfect the feature.)