LOL, you failed to prove it again.
I see you keep showing your deep disdain for everything Apple with your ”Apple sucks” and ”What a joke”, even in your new posts in other threads.
No, I didn’t fail; you failed to understand again. I proved exactly what I intended to prove. Your whataboutism and moving of the goalposts were never part of that. Ignoring and denying simple facts just hurts your credibility more each time.
You said there is a lot of GPU intensive software, and yet only six of those apps have more Mac users than Windows users: Ollama, LM Studio, Llama, Qwen, Gemma and DeepSeek, and all of them are LLM apps, not even related to GPU intensive software. The others are Windows focused, which fails to prove your point. Supporting those apps does NOT mean they are Mac focused. Literally, how many people would use a Mac over Windows for GPU intensive software? I checked ChatGPT search and now I can clearly disprove your false info.
You’re literally repeating first what I said, yet still trying to make a point about something I didn’t say? This was never a contest about which platform has the most users, and that is irrelevant to the original discussion about your first claims. Everybody knows there are more Windows users in the world, and in 3D, AI and gaming, so what? My info was never ”false”, because it was neither about ”Mac focused” software nor about more people choosing Mac over PC. That clearly disproves your pointless argument.
Your claim was ”Mac is worst platform for GPU intensive software where most of them dont even support”, and I listed many popular GPU intensive applications that do support the Mac, in 3D, AI and game development alike. You provided no source for your claim that most of those applications don’t support the Mac. I even showed that AMD and Intel GPUs do much worse than Apple’s in Blender Render, so that fact also refutes your claim about the ”worst platform”.
You’re again falsely claiming that LLMs and related applications are not GPU intensive, despite my explanations, which shows you’re ignoring the facts once more. Below is another screenshot from Alex Ziskind’s channel showing the GPU working at 100%.
It’s quite amusing that you check the facts with ChatGPT yet still make such false claims. Why don’t you ask ChatGPT ”Is LLM CPU or GPU intensive?” and see which answer you get? I’ll make it easy for you and share the answers below, along with a small timing sketch after them that you can run yourself:
- Large Language Models (LLMs) like me (ChatGPT) are GPU-intensive, not CPU-intensive.
- LLMs involve billions of matrix multiplications during both training and inference. These operations are highly parallelizable, which GPUs are optimized for.
- GPUs provide 10×–100× speedups compared to CPUs for LLM workloads.
- GPUs use high-bandwidth VRAM to store model weights and intermediate tensors. If a model doesn’t fit in GPU memory, performance plummets as data is swapped between GPU and CPU memory.
- Training: extremely GPU-intensive; often uses multiple GPUs or clusters.
- Inference: still GPU-intensive, though small models can run on CPU if speed isn’t critical.
- LLMs are overwhelmingly GPU-intensive, with CPUs handling orchestration and I/O rather than the math.
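And here is that timing sketch, if you don’t want to take a chat answer’s word for it. It’s a minimal example, assuming PyTorch with Apple’s MPS backend is installed (the matrix size and repeat count are arbitrary choices of mine); it just times the kind of matrix multiplication LLMs are built on, once on the CPU and once on the GPU:

```python
# Minimal sketch: time a big matrix multiplication on the CPU vs. the Apple GPU (MPS).
# Assumes PyTorch with the MPS backend; the numbers are illustrative only.
import time
import torch

N = 4096
a = torch.randn(N, N)
b = torch.randn(N, N)

def bench(device: str, reps: int = 10) -> float:
    x, y = a.to(device), b.to(device)
    _ = x @ y                       # warm-up so one-time setup costs don't pollute the timing
    if device == "mps":
        torch.mps.synchronize()     # wait for the GPU before starting the clock
    start = time.perf_counter()
    for _ in range(reps):
        _ = x @ y
    if device == "mps":
        torch.mps.synchronize()     # wait for the GPU to actually finish
    return (time.perf_counter() - start) / reps

print(f"CPU: {bench('cpu') * 1000:.1f} ms per {N}x{N} matmul")
if torch.backends.mps.is_available():
    print(f"GPU (MPS): {bench('mps') * 1000:.1f} ms per {N}x{N} matmul")
else:
    print("MPS backend not available on this machine")
```

On an Apple Silicon Mac the MPS time should come out far below the CPU time, which is exactly the ”GPU-intensive” point above.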
The same goes for 3D and game development software. The question was never about the number of users; you claimed most such applications don’t support the Mac, and I mentioned many popular, frequently used programs that do.
Whenever people claim that Apple is good only with LLM, what a joke. Who would run that on a single Mac compared to servers, supercomputers or even AI farms? Can a Mac do that? NO. Besides, why do you keep ignoring the slow memory bandwidth when comparing those GPUs? The memory size is NOT everything. LLM is a tiny part of AI and regardless, it can only do LLM.
The joke is on you for moving the goalposts every time you’re proven wrong, spreading disinformation, ignoring facts and explanations, and not even bothering to ask your ChatGPT. Who said that Apple is only good with LLMs? Not me; I even shared many Blender results debunking your first claims that ”M4 series GPUs are not even close to RTX 40 series” and ”Apple can’t even make a high-end GPU”. Again, you’re the only one making the claim that ”Apple is good only with LLM”.
The discussion wasn’t about servers, supercomputers or AI farms either, or about whether Macs can do that, so all of that is irrelevant and ridiculous. So now you have to compare a Mac with supercomputers to make a point? Of course a Mac ”sucks” compared to a supercomputer. What did you expect? Can an RTX 5090 do what a supercomputer does? No!
Who would run local LLMs instead of cloud solutions, and why? Maybe you should ask ChatGPT again (a minimal local-run sketch follows the list):
- Privacy & Security: Your data stays local. Nothing you type leaves your machine, which is crucial for sensitive or proprietary information (e.g., legal, medical, business data). No cloud dependency means no risk of data leaks via APIs or third-party servers.
- Offline Access: You can use the model without an internet connection, perfect for air-gapped systems, field work, or locations with poor connectivity.
- Cost Control: Once set up, a local model has no per-token or subscription costs. Great for high-volume use (e.g., generating large text batches, code, or documents).
- Customization & Control: You can fine-tune or prompt-engineer your own model for your domain (legal, medical, creative, etc.). Freedom to modify system prompts, memory, or architecture — things that are locked in cloud models. You can even chain it with other local tools (e.g., embeddings, vector DBs, custom APIs).
- Latency & Speed: Once loaded into memory, responses can be faster than remote APIs (no network delay), especially for smaller models optimized for your hardware.
- Transparency & Experimentation: You can inspect what the model does — weights, tokenization, inference process. Perfect for research, education, and AI development without external restrictions.
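To make the ”local” part concrete, here is a rough sketch of what that looks like in practice, assuming Ollama is installed and a model has already been pulled (the llama3 model name and the prompt are only examples); nothing in it goes beyond the local machine:

```python
# Minimal local-LLM sketch: ask a question of an Ollama instance running on this machine.
# Assumes Ollama is installed and `ollama pull llama3` has been run; the request never
# leaves localhost, which is the privacy point above.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",                                    # example model name
    "prompt": "Is LLM inference CPU- or GPU-intensive?",
    "stream": False,                                      # return one complete answer
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",                # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```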
So first you complained about me using ”benchmarks” and asked for real-life tests, but now you ignore real-life Blender and LLM tests and start comparing GPU specs? Memory bandwidth wasn’t even part of the discussion to begin with. Yes, the 5090 is faster with smaller models, but I’ve been talking about local LLMs that need far more than the 5090’s 32 GB of VRAM. Memory bandwidth doesn’t mean a thing if you don’t have enough VRAM, and then memory size is everything. Why do you keep ignoring VRAM? I even shared screenshots of the performance difference between the M3 Ultra and the 5090.
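A quick back-of-the-envelope calculation shows why VRAM capacity dominates here. The parameter counts and quantization levels below are just illustrative assumptions, and real runtimes also need room for the KV cache and activations, so treat the numbers as lower bounds:

```python
# Rough, illustrative estimate of how much memory an LLM's weights alone need.
def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight size in GB for a given parameter count and precision."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(8, 4), (32, 4), (70, 4), (70, 8)]:
    print(f"{params}B params @ {bits}-bit ~ {weight_gb(params, bits):.0f} GB")

# Example output (annotated):
# 8B params @ 4-bit ~ 4 GB    -> fits almost anywhere
# 32B params @ 4-bit ~ 16 GB  -> fits on a 32 GB card
# 70B params @ 4-bit ~ 35 GB  -> already more than the 5090's 32 GB of VRAM
# 70B params @ 8-bit ~ 70 GB  -> needs the kind of unified memory a Mac Studio offers
```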
Regardless of how small or big a part of AI LLMs are, they are ”one of the most advanced and complex applications of AI we currently have”. If Macs can do that, you can be sure they can do deep learning and machine learning too, because TensorFlow and PyTorch support macOS as well, and there too a large amount of VRAM is very important. Again, you’re shifting focus from your original claims. The question wasn’t how often Macs are used in different fields of work, but whether ”M4 series GPUs are not even close to RTX 40 series” or whether ”Apple can’t even make a high-end GPU”.
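As a small illustration of that macOS support, here is a sketch of a single training step on the Apple GPU through PyTorch’s MPS backend (the model and data are dummies I made up; TensorFlow has a comparable Metal plugin):

```python
# Tiny sketch of one training step on Apple's GPU via PyTorch's MPS backend.
# Falls back to the CPU if MPS isn't available; the model and batch are dummies.
import torch
import torch.nn as nn

device = "mps" if torch.backends.mps.is_available() else "cpu"

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 128, device=device)          # dummy batch of features
y = torch.randint(0, 10, (256,), device=device)   # dummy class labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)                       # forward pass
loss.backward()                                   # backward pass
optimizer.step()                                  # weight update
print(f"device={device}, loss after one step: {loss.item():.3f}")
```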
You claimed that Apple GPUs don’t suck even though they’re not even close to the RTX 40 series, and literally none of that proves anything. Is it really that hard to compare them in actual games and software rather than benchmarks? I said that already.
I didn’t just claim things. Everything I posted proves my points, and you dodging the facts doesn’t change anything. So Blender is not real 3D software and LLMs are not real AI? Don’t you hear how ridiculous that sounds? You still don’t realize that it’s not just about benchmarks. Have you even run the Blender benchmark? Monster, Junkshop and Classroom are real 3D scenes that are downloaded to your computer and then rendered when you run the benchmark. The test measures how fast they can be rendered on different hardware, so it’s as real as it gets. You can even go to Blender’s site and download other demo files for testing and rendering; there is no difference between those demo files and real projects.
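If you want to check this yourself, here is a rough sketch of rendering one of those downloaded demo files headlessly and timing it. The file name is only an example, and the Cycles device flag assumes a Metal-capable Mac; on an Nvidia machine you would pass CUDA or OPTIX instead:

```python
# Rough sketch: render a downloaded Blender demo scene headlessly and time it.
# Assumes Blender is on PATH and "classroom.blend" is an example file downloaded
# from Blender's demo-files page; adjust the names and paths for your own setup.
import subprocess
import time

cmd = [
    "blender", "-b", "classroom.blend",   # -b = run without the UI
    "-f", "1",                            # render frame 1
    "--", "--cycles-device", "METAL",     # render on the GPU via Metal (CUDA/OPTIX on Nvidia)
]

start = time.perf_counter()
subprocess.run(cmd, check=True)
print(f"Render took {time.perf_counter() - start:.1f} s")
```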
In the Classroom scene the M3 Ultra is as fast as a desktop 5070 Ti or 4070 Ti Super, and the M4 Max 40c is faster than a desktop 4070 Super, 7900 XTX and 5060 Ti.
In the Lone Monk scene the base M4 Max 32c is faster than a desktop 4070 Super and almost as fast as a 5070. Now imagine how fast the M4 Max 40c or M3 Ultra would be.
In the Barbershop scene the M3 Ultra is faster than a desktop 5070/4070/3080 Ti and a 7900 XTX.
Another 3D example is Maya with Redshift. I can’t find test results for the RTX 50 series, but the M4 Max 40c is as fast as a laptop 4090 or a desktop 4070/3090 Ti, and the M3 Ultra is as fast as a desktop 4080, which in turn is as fast as a 5070 Ti.
Games? You can game on a Mac, but everybody knows you don’t buy a Mac primarily for gaming, and especially not an M3 Ultra, or a 5090 for that matter. The most popular PC GPU on Steam is the 3060. Everybody knows Nvidia is best in gaming and nobody said the Mac is good at everything, but again, your first claims weren’t about gaming; they were that ”M4 series GPUs are not even close to RTX 40 series” and that ”Apple can’t even make a high-end GPU”.
If you had only said ”Nvidia GPUs are best in gaming”, we wouldn’t even be having this discussion. AMD and Intel can’t beat Nvidia in games either; it doesn’t mean they ”suck” at gaming, and neither do Apple GPUs on their own. The M4 Max performs like a 4060/4070/4080M in games. Just for comparison, the M3 Ultra performs like a desktop 4070 in Death Stranding at 4K Ultra, but games almost always run faster on Windows thanks to beefy, power-hungry GPUs and better optimization.
Besides, you compared a Mac Studio to laptops. A laptop RTX 4090 = a desktop RTX 4070, which again proved my point: Apple GPUs suck. Good luck proving your point with false and misleading info.
You’re the one, as usual, with the ”false and misleading info”. First of all, you said ”M4 series GPUs are not even close to RTX 40 series”, which includes both laptop and desktop GPUs. Then you casually equated a laptop 4090 with a desktop 4070 without any proof, while everyone can see that’s not the case in the Blender screenshot I posted. In my first post I said and showed that the M3 Ultra is just under and almost as fast as a desktop 5070 Ti. In the Blender test by Mathew Moniz the M3 Ultra is almost as fast as an ASUS ROG STRIX SCAR18 4090, which is also very close to a desktop 5070 Ti, not a 4070 like you say. Even so, it wouldn’t prove your point, because you said ”not even close to RTX 40 series”, and both the laptop 4090 and the desktop 4070 belong to the 40 series, which proves mine.
Saying ”what a joke” explains a lot about this discussion, because everything seems to be a joke to you. You didn’t know Blender Render is GPU intensive and called it ”CPU intensive software”, you said ”most GPU intensive software dont even support Mac”, you said ”LLM is not even related to GPU intensive software”, you said ”memory size is NOT everything” for LLMs, ”LLM is a tiny part of AI”, ”Mac can only do LLM” and ”Ultra series suppose to compete with 90 series according to Apple officially”. None of that is true.
So to recap everything:
- Apple GPUs don’t ”suck”; they can be close to, as fast as, or even faster than laptop and desktop RTX 40/50 series GPUs.
- Much ”GPU intensive” software supports the Mac.
- Blender Render is GPU intensive, not CPU intensive.
- LLMs are thoroughly GPU intensive, and for them VRAM capacity is almost everything and more important than memory bandwidth.
- LLMs are not a tiny part of AI, but ”one of the most advanced and complex applications of AI”.
- Maya/Redshift, Blender and LLMs are ”actual”, real software.
- Apple has never compared the M3 Ultra to the RTX 5090 or any other card in the 50 series.
You keep making all these claims but later conveniently forget or ignore what your own claims were. Your main mistake was to say Apple is the worst instead of saying Nvidia is the best. Nvidia being best in many cases doesn’t make Apple the worst; there are worse GPUs, like AMD’s and Intel’s, and Apple can even be the best in some cases, such as local LLMs. You may think what you want, but facts don’t lie. So next time, stop generalizing and using fan fiction instead of facts, and try to stay on point instead of moving the goalposts.