So while Apple may not have used user-attributed data, Google still knows exactly what Apple is doing in AI, exactly how Apple's models perform, and can compare their own AI with Apple's since they have both.

Sounds like a win-win for Google, Apple not so much. I wonder how much intellectual property Apple is giving up just to say "We can do that too!"

This sounds like a pretty big failure at the executive level.
Depends. It sounds more like hiring a private chef to cook in your kitchen and assuming you'll become a good cook yourself.
 
So while Apple may not have used user-attributed data, Google still knows exactly what Apple is doing in AI, exactly how Apple's models perform, and can compare their own AI with Apple's since they have both.

Sounds like a win-win for Google, Apple not so much. I wonder how much intellectual property Apple is giving up just to say "We can do that too!"

This sounds like a pretty big failure at the executive level.
This isn't exactly true. Cloud-based infrastructure usually has pretty good checks in place to protect customer privacy precisely because of this sort of thing. Microsoft can't/doesn't look at everything that's on Azure, and Amazon doesn't look at everything on AWS, for exactly this reason.
Yeah I'm sure Apple would put all of the associated IP on their servers without any restrictions... and of course Google would love the potential liabilities 🙄
 
I can't believe Google has a vision for AI while Apple wasted time on a useless headset vanity project for dim Tim.

That and the failed Car project. I think either the Car or the Vision Pro was slated to be Tim's "iPhone moment", and AI wasn't going to cut it. It seems pretty clear that AI wasn't much of a priority for Apple until it became so popular and prevalent that it could no longer be ignored (and I mean "ignored" in the sense that Siri was left to languish for years).
 
I am curious to what degree the fact that Apple used less powerful chips for model training reflects the fact that they are training simpler LLMs.
 
Responses here are hilarious.

Apple partners with both Google and Microsoft for various cloud things. What they used (the hardware) has absolutely no bearing on the model or Apple's implementation of the model, no? That's kind of like saying "Apple sucks because they use HP in the data center (vs. Dell for example)".

Apple has an initiative called ACDC (Apple Chips in Data Center) which is how they are creating an infrastructure for AI training using their own (M) silicon.

Tensor vs. Nvidia is kind of like arguing over the brand of shovel you use to dig a trench. It does not mean that Apple Intelligence is suddenly a rebranded Google Gemini.

From the article:

Unlike Nvidia, which sells its chips and systems as standalone products, Google provides access to its TPUs through cloud services. Customers using Google's TPUs have to develop their software within Google's ecosystem, which offers integrated tools and services to streamline the development and deployment of AI models.

I don't know if it makes a practical difference, but they did not just use the hardware, and it does affect how they develop and deploy the model.

If anyone can trust Google it's probably Apple, but Google's cloud services seem to be a distant third behind Amazon and Microsoft. It will be interesting to see how this plays out in a few years, when Apple has their own hardware ready.
 
Apple is reportedly developing their own AI chips. To buy themselves time, it makes sense that they are going with a rented solution rather than buying a lot of hardware they'd want to scrap in 2-3 years. And maybe they evaluated both Google and Nvidia and found the former to be more cost-effective.
And don't forget their bad blood with Nvidia
 
So while Apple may not have used user-attributed data, Google still knows exactly what Apple is doing in AI, exactly how Apple's models perform, and can compare their own AI with Apple's since they have both.

This sounds like a pretty big failure at the executive level.

1) Apple has used third-party cloud services for their iCloud infrastructure since day one.
2) Apple hired compute services from Google Cloud, which thousands of other companies also do.
3) Apple can secure their work on Google hardware by using encryption (rough sketch after this list).
4) It would be a breach of contract for Google to spy on their customers, and it would be disastrous for their cloud business if they did and someone found out.
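
A minimal sketch of what point 3 can look like in practice (purely hypothetical, using Python's cryptography library; this is not a description of Apple's actual setup):

    # Hypothetical illustration: encrypt artifacts client-side before they ever
    # touch third-party cloud storage, so the provider only sees ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # kept on your own infrastructure
    cipher = Fernet(key)

    artifact = b"checkpoints, configs, whatever is sensitive"
    encrypted_blob = cipher.encrypt(artifact)

    # upload_to_cloud(encrypted_blob)        # hypothetical upload step
    # later, back on your own machines:
    assert cipher.decrypt(encrypted_blob) == artifact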

But here's the kicker:

Apple PUBLISHED THE INFORMATION THEMSELVES in a research paper! That's the entire reason for this article.
The research paper tells the whole world how they did it, tricks they used and how good the result was.


There is no need for spying, Google can just read the research paper!
 
So at the risk of not having all the info (I did not read the white paper), did Apple use Google's chips to 'train' the models, or are they planning to use them to 'run' the models? It could be two different use cases.
 
Apple IS already reliant on Google services. Where do you think iCloud resides?? Of course they are going to choose a partner they are already doing business with. That's called business sense.
Brother, I think you are getting Google's search/ad services confused with AWS 😆

Either way, fair point about Apple choosing Google's chips over Nvidia, particularly given the bad blood between Apple and Nvidia that was already discussed earlier in the thread. Interested to see whether this affects NVDA's/GOOG's share prices, given the excitement around Apple Intelligence.
 
Securing Nvidia chips takes time and money; Apple only had one of those, so they settled for what they could get. I'm concerned about the precedent this sets, because it caught them flat-footed and they're in damage-control mode. They compromised on the foundational piece instead of delaying like they say they do: going with best of breed, not being first but being best. This is a move exuding an utter lack of confidence. It's got a shiny veneer, though! Leaves me concerned for Apple Intelligence. Let's hope that doesn't become a running gag like the mention of Siri.

The precedent was set some 15 years ago, if not longer. Apple doesn't have server hardware. They also use Linux and Windows in their own datacenters and in third-party datacenters. This is the same thing.

Training Apple foundation models for Apple Intelligence doesn't require an enormous amount of compute power or time. Apple's models are quite small compared to the newest LLMs from OpenAI etc.

I wouldn't be surprised if training only took a few days, less than a week. These models don't need to be retrained all the time since their domain is very limited.
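
Just to put rough numbers behind "a few days", here's a back-of-the-envelope sketch using the common ~6*N*D FLOPs rule of thumb. Every figure below (model size, token count, chip count, utilization, per-chip throughput) is an assumption for illustration, not something taken from Apple's paper.

    # Illustrative only; all numbers are assumptions, not from Apple's paper.
    params = 3e9             # assume a ~3B-parameter model
    tokens = 1e12            # assume ~1T training tokens
    flops_needed = 6 * params * tokens       # ~6*N*D approximation

    chips = 8192             # assumed TPU v4 count
    peak_per_chip = 275e12   # ballpark TPU v4 bf16 peak, FLOP/s
    utilization = 0.4        # assumed model FLOPs utilization

    hours = flops_needed / (chips * peak_per_chip * utilization) / 3600
    print(f"~{hours:.1f} hours")             # lands in the hours-to-days range

With made-up but plausible inputs like these, the estimate comfortably fits inside "less than a week".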
 
While your statement is true, it has no bearing on the current topic.
I would have hoped people would realize that it's an analogy and not just a random statement, lol. The bearing it has on the conversation is that just because Apple is using Google's tech doesn't mean they are throwing privacy out the window.
 
Holding a grudge doesn't make economic sense. They could easily design a contractual agreement that prevents the earlier disasters they had with Nvidia. Maybe they even did now, but Nvidia didn't accept it.

I think the point is that other companies will look at Nvidia and learn that you can't cross Steve Jobs and survive.

Er....
 
Apple PUBLISHED THE INFORMATION THEMSELVES in a research paper! That's the entire reason for this article.
The research paper tells the whole world how they did it, tricks they used and how good the result was.


There is no need for spying, Google can just read the research paper!

Not only did Apple describe what they did, they also provided a link to the source code for the libraries they used to do it. The paper also references all of the other, prior papers that revealed different optimizations for training and inference. It also talks about the source of their training data. I have no idea why people are talking about privacy in the context of this paper.

Frankly, I'm becoming a little concerned about the level of AI/ML expertise of the commenters here.

So at the risk of not having all the info (I did not read the white paper), did Apple use Google's chips to 'train' the models, or are they planning to use them to 'run' the models? It could be two different use cases.

This paper talks about training small models. The size of the models (very small) suggests what they learn here will be used for on-device inference.

FYI, as I was doing research on this, I learned something that others may find useful: Anthropic also uses TPUs for training. https://www.datacenterdynamics.com/...-tpu-v5e-chips-to-train-generative-ai-models/

In the end, math done by any chip will result in the same numbers, so the provider doesn't matter. You just want the weights calculated as cheaply as possible. If TPUs will get you the weights for less, then the smart business decision is to use those.
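
To make the "same numbers regardless of chip" point concrete, here's a tiny JAX sketch (purely illustrative, and not a claim about Apple's actual stack): the identical gradient code runs unchanged whether JAX finds a TPU, a GPU or just a CPU; the backend changes speed and cost, not the math (beyond floating-point noise).

    # Illustrative: the same training math runs on whatever backend JAX finds.
    import jax
    import jax.numpy as jnp

    def loss(w, x, y):
        return jnp.mean((x @ w - y) ** 2)    # simple least-squares loss

    grad_fn = jax.jit(jax.grad(loss))        # compiled for the available backend

    w = jnp.zeros((4,))
    x = jnp.ones((8, 4))
    y = jnp.ones((8,))

    print(jax.devices())                     # TPU, GPU or CPU, whichever is present
    print(grad_fn(w, x, y))                  # gradients come out the same either way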
 
So not only was Apple reliant on Google services, but it was also comically lagging behind Nvidia’s hardware offerings. It makes you wonder what outdated tech Siri is still running on.

Apple isn't making huge language models as part of Apple Intelligence. They don't need huge compute power to train most of the foundation models for Apple Intelligence.

The TPU v4 is from 2021 and they only needed 8192 of them for the server model. No need to use Nvidia which would have required Apple to rewrite most of their tools.
 
Apple was already in bed with Google, but this is a new leap forward that deepens the privacy stance of the company. How can you promise to maintain privacy while using Google's servers? That's idiotic! Are they giving Google all our data? Even if they remove the identifiers, it's still fingerprintable.

There are two independent processes. Training a language model is completely separate from running it.

Yes, Apple could use your data to train the model, but they say they don't. The paper discusses what kind of data they use for training.

Also, just because Apple is using Google Cloud doesn't mean that Google has access to anything readable.
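
A toy sketch of that separation (hypothetical names, and a fake stand-in for real training, just to show the shape of it): training produces a weights file from a curated corpus, and inference later only loads that file; user data never has to enter the training step.

    # Toy illustration of training vs. running a model; names are made up.
    import numpy as np

    def train(curated_corpus):
        # happens once, in a datacenter, on licensed/curated data
        weights = np.random.default_rng(0).normal(size=(4, 4))  # stand-in for real training
        np.save("model_weights.npy", weights)

    def run_on_device(user_input):
        # happens later, wherever the model is deployed; only the finished weights are needed
        weights = np.load("model_weights.npy")
        return user_input @ weights

    train(curated_corpus=["licensed text", "public web text"])
    print(run_on_device(np.ones(4)))         # user data never touches training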
 
This disclosure feels like a way out when Apple Intelligence doesn’t perform as expected.

“It’s because we used Google Tensor chips.”

Either that, or Google paid them to use Tensor. Apple rarely discloses the hardware they use to develop stuff. There was no reason to do so here, especially given the cloud-based computing and the privacy stance they have toward it.

It's a RESEARCH paper. The purpose of a research paper is to provide a lot of information on what you did and how you did it. In a 47-page paper, the TPUs are mentioned in three sentences.

The quality of Apple Intelligence training has nothing to do with the hardware.
 
In this thread: a bunch of people who don't understand the difference between training a model and running inference on one.

Apple used Google's TPUs to create the model. Training is incredibly compute-intensive, and you don't need to run it often. It costs a fortune to do, and it makes sense that they'd use a third party for it. This doesn't put anything Google-related on your phone.

Inference isn't as intense; that's where they run and store the model. That's the M2 Ultra farms we heard about.

Also, for those saying Nvidia and Apple still have a grudge: they do not.
See this: https://blogs.nvidia.com/blog/omniverse-apple-vision-pro/
 
So not only was Apple reliant on Google services, but it was also comically lagging behind Nvidia’s hardware offerings. It makes you wonder what outdated tech Siri is still running on.

Siri is most probably running on Intel CPUs, likely on Linux machines. Maybe it uses GPUs for something.
 
So... are they acknowledging that the Apple Silicon Neural Engine is crap? I thought they always marketed it as the best for machine learning.

It was made for running machine learning algorithms on-device.

Apple's SoC was never made for training large language models or other compute-heavy ML algorithms in a datacenter.

Training a language model is independent from running it.
 