Not sure where the supposed drama is in Apple not having its own LLM. Not every tech company needs its own version of every “thing”. Facebook isn’t dying because they don’t have a search engine. Google isn’t dying because they don’t have a successful social network of their own. Apple’s business is selling (often great) hardware at premium prices. Search isn’t their business. Social networking isn’t their business. I don’t think AI is their business either.
A friend of mine texted me last night about how he used ChatGPT, Google AI Studio and Claude to build some stuff for him. Now, I’ll be phasing out a Dynamics CRM solution for a customer soon, which means all their accounts and contacts will need to be exported into something else. I’ll be using Python for that. The Dynamics CRM API uses the typical refresh_token/access_token setup, where the access token can expire but you can get a new one through the refresh token.
So I asked an AI to write the Python code to refresh the access token. Of all the choices I had, I picked Google AI Studio. No real reason other than it was mentioned in the text I got earlier. The code was written. I’ve used the Dynamics CRM API in other languages, and as far as I can tell, the Python code generated by Google looks correct.
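For the curious, here’s roughly what that kind of code looks like. This is a minimal sketch, not the exact code I got back: it assumes the standard Azure AD OAuth2 token endpoint, and the tenant ID, client ID, client secret and CRM organization URL are all placeholders rather than real values from the project.

```python
import requests

# Placeholder values; the real ones come from the Azure AD app registration
# and the customer's Dynamics environment.
TENANT_ID = "your-tenant-id"
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"
RESOURCE = "https://yourorg.crm.dynamics.com"
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"


def refresh_access_token(refresh_token: str) -> dict:
    """Exchange a refresh token for a new access token (and refresh token)."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            # offline_access asks Azure AD to hand back a new refresh token too.
            "scope": f"{RESOURCE}/.default offline_access",
        },
        timeout=30,
    )
    response.raise_for_status()
    # The JSON response contains "access_token", "refresh_token" and
    # "expires_in"; store the new refresh token, since it gets rotated.
    return response.json()
```

You’d then stick the returned access token in the Authorization header of every Web API call and re-run the refresh whenever it expires.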
Makes me wonder though … does it even matter which AI I got this code from? I’m sure Claude would have spit it out as well. Come to think of it, I bet ChatGPT would have too.
So, in the end, does it really matter which AI wrote that code? It certainly doesn’t for me. The part that matters is that I got the code I wanted. My customer doesn’t care, just like they don’t care whether I used Stack Overflow for other things.
So, let’s say I’m in Xcode doing some SwiftUI stuff, and some AI in Xcode helps me achieve that. Does it matter whether the code was generated by Google, Claude or ChatGPT? No, it doesn’t.
In fact, I’d rather have great code generated by a non-Apple LLM than dodgy code generated by Apple themselves.