Years ago Supermicro sold Apple servers that had a virus in the firmware. IIRC, Apple pulled all its Supermicro servers and went with a different brand. I'm surprised Apple would consider them again.
> It has been a few years since Apple discontinued the Xserve; they should consider going back into the server business and release a first-party AI solution. Apple has its own top-of-the-line chips with the M series. Apple could give the industry a run for its money.

Talking about machine learning, and especially training, Apple has nothing.
> So much for Apple's security and running everything locally on your iPhone.

Running an LLM locally is one story, but training an LLM is a completely different story. For that task you need Nvidia chips.
> You cannot run anything close to GPT on a smartphone. They'll either have to run on the server, or be much more limited in what they do (or both). In any case, they'll need the servers to train the models, even if the trained models then run on-device.

Why is the comparison always to GPT? It seems obvious that Apple would not want or need to replicate GPT on the phone.
> CUDA outperforms Apple GPU by a large margin.

CUDA is a programming platform/API. A GPU is hardware. Saying that CUDA outperforms an Apple GPU is like saying Pascal outperforms a 486. It's a nonsense statement.
EPYC servers with MI300 and, later, MI400 racks.
> CUDA is a programming platform/API. A GPU is hardware. Saying that CUDA outperforms an Apple GPU is like saying Pascal outperforms a 486. It's a nonsense statement.
And yet there is a point buried in the nonsense, which is that Nvidia’s success is more about the widespread use of CUDA and its optimization for Nvidia GPUs than it is about Nvidia’s actual GPUs.
And that is also why Apple can compete with Nvidia for Apple’s business. Apple has some of the best silicon design people and some of the best compiler and software tools people. Writing highly optimized software for their own hardware and for their own purposes is what Apple is all about.
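The platform-versus-hardware point can be sketched in code. Everything below is a hypothetical toy (the class and backend names are made up): the same API object can sit in front of different devices, so comparing an API to a chip mixes layers:

```python
# Toy sketch of the platform-vs-hardware distinction. All names are
# hypothetical stand-ins: "ComputePlatform" plays the role of an API like
# CUDA or Metal; the plugged-in backend plays the role of the device that
# actually does the arithmetic.

def cpu_backend(a, b):
    """Stand-in for a hardware device: does the actual multiply-accumulate."""
    return sum(x * y for x, y in zip(a, b))

class ComputePlatform:
    """Stand-in for a software platform: it dispatches work, but the speed
    ultimately comes from whatever device sits underneath."""
    def __init__(self, name, device):
        self.name = name
        self.device = device

    def dot(self, a, b):
        return self.device(a, b)

cuda_like = ComputePlatform("CUDA-like API", cpu_backend)
metal_like = ComputePlatform("Metal-like API", cpu_backend)

# Same device, two APIs -> identical results. "API X outperforms chip Y"
# is a category error; what differs is how well each API exploits its chip.
assert cuda_like.dot([1, 2, 3], [4, 5, 6]) == metal_like.dot([1, 2, 3], [4, 5, 6])
```

The design point matches the comment above: Nvidia's edge is that the software layer is mature and heavily optimized for its own hardware, which is exactly the layer Apple also controls for its platforms.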
> Is Apple really behind? Google just released a laughable LLM that is so biased it thinks certain people do not exist! Also, everything Microsoft does with AI comes from OpenAI! And Microsoft just backstabbed OpenAI by investing in the Mistral LLM.

Apple is mostly behind on *hyping* AI, honestly.
> Talking about machine learning, and especially training, Apple has nothing. Running an LLM locally is one story, but training an LLM is a completely different story. For that task you need Nvidia chips. Besides, Honor is already running Meta's LLM locally on a smartphone (a less powerful version).
> https://www.trendingtopics.eu/honor-chinese-smartphone-giant-integrates-metas-ai-model-llama-2/

You don't specifically need Nvidia's chips; Nvidia just happens to dominate that segment right now. There are, and will be, other players.
All companies and businesses depend on other companies and businesses. No business is 100% independent of, or non-reliant on, another business.
> Apple has repeatedly said that they want to own/control the core technologies for their business. Their development of Apple Silicon (which many, many people said would NEVER happen, because how could lowly little Apple compete with mighty Intel) is a prime example of that.

Terrible example: Apple Silicon doesn't exist without TSMC and (formerly) Samsung.
> Talking about machine learning, and especially training, Apple has nothing. Running an LLM locally is one story, but training an LLM is a completely different story. For that task you need Nvidia chips. Besides, Honor is already running Meta's LLM locally on a smartphone (a less powerful version).
> https://www.trendingtopics.eu/honor-chinese-smartphone-giant-integrates-metas-ai-model-llama-2/

Rumor has it that iOS 18 will be built on AI, so Apple does have some tricks up its sleeve. They have not told us much about it yet, but we will learn more at WWDC next summer. They could run this on their own server hardware platform, built on their own silicon, and it would give the industry a run for its money. Apple's strong suit is that they develop the hardware, software, and services, and this integration is what makes them so successful.
> Rumor has it that iOS 18 will be built on AI, so Apple does have some tricks up its sleeve. They have not told us much about it yet, but we will learn more at WWDC next summer. They could run this on their own server hardware platform, built on their own silicon, and it would give the industry a run for its money. Apple's strong suit is that they develop the hardware, software, and services, and this integration is what makes them so successful.

Why don't people understand that running and training LLMs are two different pairs of shoes? Apple has no powerful silicon to train its LLMs, like the H100, but it can run trained LLMs on its A-series silicon, of course.
Seriously? They're called CUDA Cores.
NVIDIA H100 — Transform your AI workloads with the NVIDIA H100 Tensor Core GPU, featuring the new Transformer Engine and NVIDIA AI Enterprise. (docs.nvidia.com)
> For initial raw power Nvidia chips exceed Apple, but in efficiency, no way! And in the long run it's all about efficiency.

They really do exceed in efficiency as well.
> They really do exceed in efficiency as well.

I'm not sure that graphs like this are as informative as they appear. The 4090 is a 5nm consumer GPU launched in 2022, while the M3 Max is on 3nm. When Nvidia moves to 3nm, it will at least double the 4090's performance at similar power. And the 4090 is a consumer gaming GPU, not the best Nvidia can do. For Apple to get close to Nvidia's level, they would need to increase performance in various workloads by 10x in one or two generations. I don't see how they can do that.
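The efficiency argument can be made concrete with a quick perf-per-watt calculation. All figures below are approximate public assumptions (peak FP32 TFLOPS and typical power draw for each part), and a single peak number is exactly the kind of thing such graphs compress away, so treat this as illustrative only:

```python
# Rough perf-per-watt arithmetic with assumed, approximate figures:
# RTX 4090 at ~82.6 peak FP32 TFLOPS and ~450 W board power;
# M3 Max GPU at roughly 14 TFLOPS and roughly 60 W under load.
# Peak-FLOPS-per-watt says nothing about real workloads, precisions,
# or tensor-core paths, so this is a sketch, not a verdict.

def tflops_per_watt(peak_tflops: float, watts: float) -> float:
    """Peak throughput per watt for a single quoted operating point."""
    return peak_tflops / watts

rtx_4090 = tflops_per_watt(82.6, 450)  # 5nm desktop card (2022)
m3_max = tflops_per_watt(14.0, 60)     # 3nm SoC GPU (approx.)

print(f"RTX 4090: {rtx_4090:.2f} TFLOPS/W, M3 Max: {m3_max:.2f} TFLOPS/W")
# Under these assumptions the per-watt gap is far smaller than the ~6x
# raw-throughput gap, which is why process node (5nm vs 3nm) and market
# segment dominate any headline comparison between the two.
```

This is also why the node caveat above matters: shrink the 450 W part to 3nm and the per-watt picture shifts again, without either side's architecture changing.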