Maybe they'll eventually go Nvidia once the old dinosaurs are retired... or maybe they want to eventually make their own AI chips and make them look better without doing much work. Don't set the bar too high lol.

No, they won't go Nvidia. Nvidia uses, among other things, CUDA, which would mean Apple would have to rewrite a lot of its toolset for both training and running models.
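For what it's worth, Apple's own research write-ups describe a JAX-based training stack (AXLearn), and JAX code is largely backend-agnostic, so the TPU-vs-Nvidia choice is more a procurement question than a rewrite. A toy sketch of what backend-agnostic training code looks like (made-up model, nothing Apple-specific):

```python
# Illustrative only: a tiny JAX training step. JAX dispatches the same code to
# whatever backend is available (CPU, GPU, or TPU), which is why a JAX-based
# training stack isn't tied to Nvidia hardware. The model here is a made-up
# linear regression, not anything Apple ships.
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit  # compiled by XLA for the local backend, TPU or GPU alike
def train_step(params, x, y, lr=0.1):
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((3,)), "b": jnp.zeros(())}
x = jnp.ones((8, 3))
y = jnp.ones((8,))
params = train_step(params, x, y)
print(jax.devices())  # e.g. TPU devices on Cloud TPU, CUDA devices on an Nvidia box
```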
 
It'll only have deepened because Nvidia are no longer just a GPU company. They are the leader when it comes to AI hardware. Nvidia have been playing the AI game much longer than the likes of Microsoft or Google. Their consumer GPUs have had AI tensor cores for about six years now. That's why they have grown significantly recently. They saw where things were heading. Everyone else jumped on the bandwagon, and Apple is last to get onboard.

Google released their first TPU in 2015, 9 years ago.
 
What an ABSOLUTE JOKE!

Tensor ... chips that are 2yrs behind the industry standard and using old Samsung Foundry manufacturing processes until next year!

This is a colossal failure on Apple's part. The ONLY reason to use Tensor chips in 2024 is to be efficient with your code, because the SoC itself is garbage! I'm not just talking out my arse here, because even Samsung's much-hated Exynos 2400 SoC outperforms it on ALL measurements, and in real-world tasks and applications as well.


Sorry, ONLY in video playback minutes is the Google Pixel 8, the phone with the Tensor chip Apple supposedly used, 'SLIGHTLY' better, and only by a few minutes.

Maybe you should read up on the difference between training and running a language model.

What hardware you use to train the model can be completely different from the hardware you use to run it. Also, the performance of the model is independent of the hardware it's trained on.
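To make the train-vs-run split concrete, here's a toy sketch: a model that could have been trained on rented datacenter accelerators gets exported to Core ML so it can run on-device. The model, names, and file path are all made up for illustration; this is not Apple's actual pipeline.

```python
# Illustrative only: the hardware that trains a model and the hardware that runs
# it are decoupled. A toy PyTorch model (train it on whatever GPUs/TPUs you rent)
# is exported to Core ML so it can run on-device, far from the training cluster.
import torch
import coremltools as ct

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(16, 4)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()                  # weights would come from a datacenter training run
example = torch.rand(1, 16)
traced = torch.jit.trace(model, example)  # freeze the graph for conversion

mlmodel = ct.convert(traced, inputs=[ct.TensorType(shape=example.shape)])
mlmodel.save("TinyNet.mlpackage")         # this artifact runs on-device; Core ML decides
                                          # whether to use the CPU, GPU, or Neural Engine
```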
 
Apple is reportedly developing their own AI chips. To buy themselves time, it makes sense that they are going with a rented solution rather than buying a lot of hardware they’d want to scrap in 2-3 years. And maybe they evaluated both Google and Nvidia and found the former to be more cost-effective.

I'm not even sure Apple will create hardware for training large language models, just for running them in the cloud.

Google is definitely cheaper. The TPU v4 is from 2021, and there is no way Google could charge more for it than Nvidia charges for its latest chips.
 
I am curious to what degree the fact that Apple used less powerful chips for model training reflects the fact that they are training simpler LLMs.

This is what a lot of people in this thread don't understand.

Some of the language models Apple needs for Apple Intelligence are like Driving Miss Daisy compared to the general-purpose language models which OpenAI, Google and others are creating.
 
It was made for running machine learning algorithms on-device.

Apple's SoC was never made for training large language models or other compute-heavy ML algorithms in a datacenter.

Training a language model is independent of running it.
I invite you to watch again the announcement of every M series chip where they keep advertising the Neural Engine as the best for training and running machine learning models.
 
So while Apple may not have used user attributed data, Google still knows exactly what Apple is doing in AI, exactly how their models perform, and can compare their own AI with Apple's AI since they have both.

Sounds like a win-win for Google, Apple not so much. I wonder how much intellectual property Apple is giving up just to say "We can do that too!"

This sounds like a pretty big failure at the executive level.

This was a research paper. This was Apple doing R&D using Google’s services. I seriously doubt Apple used any kind of “private” data to train those models. And I seriously doubt Google “looked” through any of that data. Google only “steals” your user data that you agreed to give them for using their free services.

Apple has stated previously that none of this research data goes into training the models they ship as part of their products.
 
Oh boy! The comments here are a new low for this forum. I never expected to see such a bunch of ignorant takes on the topic. Can't people just shut it if they don't understand something? This is pathetic.
 
I invite you to watch again the announcement of every M series chip where they keep advertising the Neural Engine as the best for training and running machine learning models.
You are very ignorant if you think that Apple's Neural Engine is meant for training and running large foundation or machine learning models in the cloud. It's a freaking small section of their chip, mainly for running small machine learning models ON-DEVICE!
 
I am convinced most accounts on this forum are bots cause humans can’t be this stupid right???

Everyone is an expert on AI, but most can't tell the difference between training a model and running a model, or the difference between a TPU and Google’s Tensor chip in its Pixel phones.

Just because you use Google Cloud doesn’t give Google access to your data, you schmucks… ffs, there are terms of service and legally binding agreements in place.

One thing that this whole AI thing has exposed is how frikin low the bar is for the general public when it comes to intelligence.

Mods do **** to prevent misinformation as well; most of the time they are just as clueless.
 
I am convinced most accounts on this forum are bots
Not sure if you are joking, but I was already starting to think that myself. I'm just not sure what the point would be. I guess they could just be creating content to provoke real users into responding, which is then used as training data? But if there are a bunch of bots run by competing players masquerading as humans, how could they tell which content was actually human?

Regardless, this is definitely one of the lowest quality conversations I've seen on MR.
 
Apple and Google are competitors, but they do come together on strategic partnerships.


Apple flipping HATES Nvidia but so does the rest of the industry to be quite honest.


Kinda figured Apple would look for an alternative to relying on Nvidia for something of such high importance. Big win for Google.
 
Apple missing the boat on AI is incredibly disappointing. I can't believe I'm going to have to go back to a PC for my dev work, but I absolutely cannot justify paying an exorbitant premium for a new Apple desktop that was not designed for the one thing it should have been designed for. I feel like my colleagues are using Jarvis and I'm using a Speak and Spell. Every time I think about all the resources Apple devoted to the AVP, I want to drive to Cupertino and leave a floater in Tim Cook's toilet (it's a reference to "The Boys", watch it, it's fantastic. I am Ashley rn).
 
I am convinced most accounts on this forum are bots cause humans can’t be this stupid right???

No, actually, people really are this stupid on technical topics. Too much "LOOK AT ME, I HAVE AN ANSWER," not enough, "I'm here to learn something new before I respond."
 
Absolutely comical from Apple, you couldn't make this up.

Google Tensor gets a load of stick from tech "journalists" but it's still the best for AI.

Dumping the iPhone 15 Pro for the Pixel 9 Pro on the 13th; this is just embarrassing from Apple.
They aren't using the Tensor SoCs found in Google Pixels, they're using their Tensor Processing Units. The Tensor NPU found in the Tensor SoCs used in Google Pixels is significantly slower than the Neural Engine.
 
So.. are they acknowledging that the Apple Silicon Neural Engine is crap? I thought they always marketed it as the best for machine learning

There are some pretty bad takes and misunderstandings on the first page of this thread, but this one takes the cake.


I mean, this is so far wrong I don't know where to start. The NE is part of a consumer chip on an Apple product that helps with low-power on-device ML for that product - it spots faces in Photos, for instance.

This has absolutely NOTHING to do with training LLMs. You need insane amounts of compute power to do it, and it's not even a chip designed for it, even if you had a million of them. They would either use Nvidia's Blackwell or Google's TPUs. Neither chip has anything to do with how good an LLM performs or what it can do; they simply provide the compute power for training the models.
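To put very rough numbers on "insane amounts of compute": a common heuristic for dense transformer training cost is about 6 × parameters × tokens FLOPs. Everything below is a made-up round number (not from Apple's paper), and real runs are much slower than the ideal figure, but the scale gap is the point:

```python
# Back-of-envelope only, with made-up round numbers, to show the scale gap.
# Uses the common ~6 * params * tokens heuristic for dense transformer training
# FLOPs; every figure below is an assumption, not a spec.
params = 3e9           # a hypothetical ~3B-parameter, on-device-sized model
tokens = 1e12          # a hypothetical 1T-token training run
train_flops = 6 * params * tokens          # ~1.8e22 FLOPs

npu_flops_per_s = 35e12        # roughly what phone NPUs advertise (and that's low-precision inference throughput)
pod_flops_per_s = 8192 * 3e14  # thousands of datacenter accelerators at a few hundred TFLOPS each

print(f"Single NPU: {train_flops / npu_flops_per_s / 86400 / 365:.0f} years")
print(f"Accelerator pod (ideal, 100% utilization): {train_flops / pod_flops_per_s / 3600:.1f} hours")
```

Even with wildly generous assumptions for the NPU, you're talking years versus hours.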
 
So while Apple may not have used user attributed data, Google still knows exactly what Apple is doing in AI, exactly how their models perform, and can compare their own AI with Apple's AI since they have both.

Sounds like a win-win for Google, Apple not so much. I wonder how much intellectual property Apple is giving up just to say "We can do that too!"

This sounds like a pretty big failure at the executive level.

No they don't. Just because they've provided compute power to do processing doesn't mean they can see that processing or what it achieves. They've used TPUs to train the LLMs, that's all.
 
Apple has very bad relations with Qualcomm, but they ended up going back to them for best performance.

In this case, it’s clear Nvidia has the best hardware for developing AI.

It's irrelevant if their processors are slightly better. This is like asking whether the best music is made using an Intel processor or an AMD processor - it doesn't matter. Even if one is much more powerful than the other, it doesn't affect the end result. Apple just needed mountains of compute power to train the LLMs on. They probably went with the cheapest and most available option - which processor it actually was is totally irrelevant to us or to the end result.
 
Apple was already in bed with Google, but this is a new leap forward that deepens the privacy stance of the company. How can you promise to maintain privacy while using Google's servers? That’s idiotic! Are they giving Google all our data? Even if they remove the identifiers, it’s still fingerprintable.

Oh god, where to begin with this one too. We won't be using Google's servers when processing data.

They've used Google's TPUs to train the LLM - the one which will then sit on our devices and process on-device. The Google servers wouldn't have seen anything anyway - it's not like hosting a web server in the cloud; they're renting the compute power needed to train and create the models.

The first few posts on this page are so, so confused by this, yet so confidently vocal too.
 
How disingenuous of certain people in this thread...

Did Apple really “admit” to it in defeat, as a spokesperson looked down and kicked the ground, OR did they DISCLOSE it in their research papers? Is it not standard practice to say what you’ve trained your model on?
 