Will someone with knowledge on the topic please educate me?
What properties exist in current open-source models that are at odds with Apple's privacy vision? Is there a theoretical way for Apple to employ a good, open-source model while simultaneously satisfying their own vision? The available models seem good - even a slightly knocked-down version would be better than what they're working with today.
It depends on how well they actually use the open-source models. I personally would not want them touching Deepseek, the Chinese-origin model.
Why?
Apple just said "do whatever it takes." That likely means cutting corners in unknown places. If they go with an outside model, especially one made in China, that is a disaster waiting to happen. Sadly, at this point I think Apple will probably do anything required to catch up since they are so far behind, and I think that also includes not keeping the Apple "Privacy" standards we are normally used to.
Sure, open-source code can be looked at, but I don't trust Deepseek whatsoever, and Apple has said they've actually considered that model. Why do you think the paid API is so insanely cheap to use? No reason other than to completely undercut every other AI company's API prices (which, I agree, are for the most part extremely high).
And just because they made parts of the model open source does not mean that Apple will keep up and actually go through the code for each update. They absolutely should, but again, "whatever it takes" has a very ominous tone at this stage at Apple. I don't think they care at this point, when things are on fire over there.
I'd feel OK if they used other models from OpenAI, Anthropic, Mistral, or even Meta. But I don't think that's where they're headed. While all of those companies collect absurd amounts of data for training and re-training, I would still consider them far safer than using Deepseek in any form.
I would do whatever it takes to keep my data from going directly into the hands of the Chinese government. So if Deepseek is the future of Apple Intelligence, even if it's used for 0.0001% of requests, I'm done with it. My trust in Apple would be completely gone at that point.
EDIT: I'm sure I'll get hell for this comment, but please think about it before you thumb it down. Things are definitely heating up, shortcuts are being taken, and more like this is coming. And this is only the part that has been made public; imagine what they're discussing behind closed doors. I hate to even think about how much further they're pushing things outside the usual, older Apple "Safety" box.