Years ago Supermicro sold Apple servers that had a virus in the firmware. IIRC, Apple pulled all its Supermicro servers and went with a different brand. I'm surprised Apple would consider them again.

This was a news story that was disproven. Bloomberg got bad info from their sources.
 
It has been a few years since Apple discontinued the Xserve; they should consider going back into the server business and release a first-party AI solution. Apple has its own top-of-the-line chips with the M series.

Apple could give the industry a run for its money.
Talking about machine learning, and especially training, Apple has nothing.
So much for Apple's security and running everything locally on your iPhone.
Running an LLM locally is one story, but training an LLM is a completely different story. For that task you need Nvidia chips.
Besides that, Honor is already running Meta's LLM locally on a smartphone (a less powerful version).

https://www.trendingtopics.eu/honor-chinese-smartphone-giant-integrates-metas-ai-model-llama-2/
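For reference, running a small model locally on Apple Silicon is already straightforward today. A minimal sketch using Apple's MLX tooling (assumes `pip install mlx-lm`; the quantized model name is just an illustrative example, not a recommendation):

```python
# Minimal local LLM inference on Apple Silicon via Apple's MLX library.
# Assumes `pip install mlx-lm`; the model repo below is one example of
# a 4-bit community quantization, not a specific endorsement.
from mlx_lm import load, generate

# The first call downloads the weights and maps them into unified memory.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Inference like this fits comfortably on a laptop -- which is exactly
# why "running" an LLM and "training" one are such different problems.
text = generate(model, tokenizer, prompt="Why is the sky blue?", max_tokens=100)
print(text)
```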
 
You cannot run anything close to GPT on a smartphone. They’ll either have to run on the server, or be much more limited in what they do (or both).

In any case, they’ll need the servers to train the models, even if the trained models then run on-device.
Why is the comparison always to GPT? It seems obvious that Apple would not want or need to replicate GPT on the phone.

With respect to training the models: they only have to be trained once. Even if done on servers (I'm not sold on your supposition that this is a self-evident requirement; doesn't that mostly depend on the data size?), why would it entail a massive, multi-billion dollar investment in AI server hardware on Apple's side?
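For scale, here is a rough sketch of what a single training run costs under the widely used ~6 × parameters × tokens FLOPs heuristic. Every number below is an illustrative assumption, not anything Apple has announced:

```python
# Back-of-the-envelope training cost using the common heuristic that
# one training run takes roughly 6 * parameters * tokens FLOPs.
# All numbers here are illustrative assumptions, nothing official.
params = 70e9    # a 70B-parameter model
tokens = 2e12    # ~2 trillion training tokens
flops = 6 * params * tokens        # ~8.4e23 FLOPs for one run

# Assume ~4e14 sustained FLOP/s per accelerator (a rough utilization
# figure for a modern datacenter GPU at mixed precision).
per_gpu_flops = 4e14
gpu_hours = flops / per_gpu_flops / 3600
print(f"~{gpu_hours:,.0f} GPU-hours per run")   # ~583,333 GPU-hours

# At a few dollars per GPU-hour that's millions per run -- and labs run
# many experiments at ever-larger scales, which is where multi-billion
# dollar hardware budgets come from.
```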
 
Scary how full speed ahead everything AI is, considering all the questions and warnings from those much more in the know than the general public. What's the line between conspiracy and taking necessary precautions?
 
CUDA outperforms Apple GPU by a large margin.
CUDA is a programming platform/API. A GPU is hardware. Saying that CUDA outperforms an Apple GPU is like saying Pascal outperforms a 486. It's a nonsense statement.

And yet there is a point buried in the nonsense, which is that Nvidia’s success is more about the widespread use of CUDA and its optimization for Nvidia GPUs than it is about Nvidia’s actual GPUs.

And that is also why Apple can compete with Nvidia for Apple’s business. Apple has some of the best silicon design people and some of the best compiler and software tools people. Writing highly optimized software for their own hardware and for their own purposes is what Apple is all about.
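To make the platform-vs-hardware point concrete: in a framework like PyTorch, the same user code can be dispatched to Nvidia hardware through CUDA or to Apple Silicon through Metal; what differs is the maturity and optimization of each backend. A minimal sketch:

```python
# CUDA is the software layer frameworks target, not the silicon itself.
# The same PyTorch code runs on Nvidia GPUs via CUDA or on Apple
# Silicon via Metal (the "mps" backend); the user code is identical.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")   # Nvidia GPU via the CUDA platform
elif torch.backends.mps.is_available():
    device = torch.device("mps")    # Apple GPU via Metal Performance Shaders
else:
    device = torch.device("cpu")

x = torch.randn(4096, 4096, device=device)
y = x @ x  # same code; the backend decides how it's executed
print(device, y.shape)
```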
 
CUDA is a programming platform/API. A GPU is hardware. Saying that CUDA outperforms an Apple GPU is like saying Pascal outperforms a 486. It's a nonsense statement.

And yet there is a point buried in the nonsense, which is that Nvidia’s success is more about the widespread use of CUDA and its optimization for Nvidia GPUs than it is about Nvidia’s actual GPUs.

And that is also why Apple can compete with Nvidia for Apple’s business. Apple has some of the best silicon design people and some of the best compiler and software tools people. Writing highly optimized software for their own hardware and for their own purposes is what Apple is all about.

Seriously? They're called CUDA Cores.

 
Is Apple really behind? Google just released a laughable LLM that is so biased it thinks that certain people do not exist! Also, everything Microsoft does with AI comes from OpenAI! And Microsoft just backstabbed OpenAI by investing in the Mistral LLM.
Apple is mostly behind on *hyping* AI, honestly.
 
Talking about machine learning, and especially training, Apple has nothing.

Running an LLM locally is one story, but training an LLM is a completely different story. For that task you need Nvidia chips.
Besides that, Honor is already running Meta's LLM locally on a smartphone (a less powerful version).

https://www.trendingtopics.eu/honor-chinese-smartphone-giant-integrates-metas-ai-model-llama-2/
You don't specifically need Nvidia's chips; Nvidia just happens to dominate that segment right now. There are, and will be, other players.

Now, whether Apple wants to get into that or not is a different story, but they definitely have both the expertise and the cash to do so if they wanted. An updated Xserve with an M-series chip built for performance, massive shared memory, a hugely boosted GPU, and more NPUs could be an interesting competitor in the space. It's not the kind of market Apple usually targets, though; the closest thing would have been Xgrid on the old Xserves, and that was never more than a footnote in any form of HPC or distributed workloads.
 
All companies and businesses depend on other companies and businesses.
No business is 100% independent of every other business.

Apple has repeatedly said that they want to own and control the core technologies behind their business. Their development of Apple Silicon (which many, many people said would NEVER happen, because how could lowly little Apple compete with mighty Intel?) is a prime example of that.
Terrible example: Apple Silicon doesn't exist without TSMC and (formerly) Samsung.
Apple says a lot of things, but at the end of the day, while they might design everything in-house from their hardware to their software, they don't actually *produce and run* much of anything on their own.
Their displays are produced by Samsung, their cameras by Sony, their modems by Qualcomm, and so on.
Even the rumored upcoming Apple modems are still going to be mass-produced by TSMC.
Edit: and the ultimate irony is that Apple uses Google servers for iCloud…
 
iPhone, please stop guessing what I want to do. You are wrong 95% of the time and it just ticks me off.
 
Talking about machine learning, and especially training, Apple has nothing.

Running an LLM locally is one story, but training an LLM is a completely different story. For that task you need Nvidia chips.
Besides that, Honor is already running Meta's LLM locally on a smartphone (a less powerful version).

https://www.trendingtopics.eu/honor-chinese-smartphone-giant-integrates-metas-ai-model-llama-2/
Rumor has it that iOS 18 will be built on AI, so Apple does have some tricks up their sleeve. They have not told us much about it yet, but we will learn more at WWDC next summer. They could run this on their own hardware server platform built on their own silicon, and it would give the industry a run for its money. Apple's strong suit is that they develop the hardware, software, and services, and this integration is what makes them so successful.
 
Will be huge to secure an order from Apple. Waiting to hear more about Apple and AI at WWDC
 
Rumor has it that iOS 18 will be built on AI, so Apple does have some tricks up their sleeve. They have not told us much about it yet, but we will learn more at WWDC next summer. They could run this on their own hardware server platform built on their own silicon, and it would give the industry a run for its money. Apple's strong suit is that they develop the hardware, software, and services, and this integration is what makes them so successful.
Why don't people understand that running and training LLMs are two entirely different things? Apple has no silicon as powerful as the H100 for training its LLMs, but of course it can run trained LLMs on its A-series silicon.

Have a look:
https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet
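To put rough numbers on that distinction, here is a sketch of the memory math, under illustrative assumptions (7B parameters, fp16 inference, mixed-precision Adam training at the commonly cited ~16 bytes per parameter):

```python
# Rough memory math for why running and training are different problems.
# Illustrative assumptions: fp16 weights, Adam optimizer, 7B parameters.
params = 7e9

inference_bytes = params * 2     # fp16 weights only
# Mixed-precision training with Adam is commonly estimated at ~16 bytes
# per parameter: fp16 weights + gradients, fp32 master weights, and two
# fp32 Adam moments -- before activations are even counted.
training_bytes = params * 16

gib = 1024 ** 3
print(f"inference: ~{inference_bytes / gib:.0f} GiB")   # ~13 GiB
print(f"training:  ~{training_bytes / gib:.0f} GiB")    # ~104 GiB
```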
 
For initial raw power Nvidia chips exceed Apple, but in efficiency? No way! And in the long run it's all about efficiency.
They really do exceed in efficiency as well.
And the 4090 is a 5nm consumer GPU launched in 2022, while the M3 Max is on 3nm. When Nvidia moves to 3nm it will at least double the 4090's performance at similar power. And the 4090 is a consumer gaming GPU, not the best Nvidia can do.
In order for Apple to get close to Nvidia's level they need to increase performance in various workloads by 10x in one or two generations. I don't see how they can do that.
[Attached image: 57700-117526-IMG_4425-xl.jpg]
 
They really do exceed in efficiency as well.
And the 4090 is a 5nm consumer GPU launched in 2022, while the M3 Max is on 3nm. When Nvidia moves to 3nm it will at least double the 4090's performance at similar power. And the 4090 is a consumer gaming GPU, not the best Nvidia can do.
In order for Apple to get close to Nvidia's level they need to increase performance in various workloads by 10x in one or two generations. I don't see how they can do that.
[Attached image: 57700-117526-IMG_4425-xl.jpg]
I'm not sure that graphs like this are as informative as they appear.

When software is changing as fast as AI software is right now, mainly what you are seeing is which software has currently been ported to or developed on which hardware. For example, if we took the Mac Whisper software and spent some time porting it to Mojo, how would that change things? If we fixed it to use an Apple Silicon-optimized flash attention, how would that change things?
Insanely-fast-whisper does supposedly run on Apple Silicon, but I can find ZERO benchmarks for it; and that's even assuming it makes optimal use of the most recent libraries (like MLX or Philip Turner's Flash Attention).

Nvidia deserves the attention; they have worked hard to keep their software relevant and improved year after year, along with continuous improvement of their hardware.
BUT, like I said, a snapshot like the one above of where they appear to be today (and that graph is actually from about two months ago) is mainly a snapshot of who has bothered to put together an optimized collection of software and benchmark it; it's in no sense a measure of *intrinsic* abilities or *intrinsic* 10x performance or efficiency differences.
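As an illustration of that point, here is a sketch of the kind of micro-benchmark such graphs are built from: the same matmul timed on whichever PyTorch backends a machine happens to have. Swapping in a better-optimized kernel or library shifts the numbers without the silicon changing at all:

```python
# Sketch: time the same matmul on the available PyTorch backends.
# Results reflect the software path (kernels, libraries, backend
# maturity) as much as they reflect the underlying hardware.
import time
import torch

def sync(device: str) -> None:
    # Make sure queued GPU work has finished before reading the clock.
    if device == "mps":
        torch.mps.synchronize()
    elif device == "cuda":
        torch.cuda.synchronize()

def bench(device: str, n: int = 4096, iters: int = 20) -> float:
    x = torch.randn(n, n, device=device)
    for _ in range(3):          # warm-up: don't time lazy init/compilation
        _ = x @ x
    sync(device)
    start = time.perf_counter()
    for _ in range(iters):
        _ = x @ x
    sync(device)
    return (time.perf_counter() - start) / iters

for dev in ("cpu", "mps", "cuda"):
    try:
        print(dev, f"{bench(dev) * 1000:.1f} ms per matmul")
    except (RuntimeError, AssertionError):
        pass  # backend not available on this machine
```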
 