One of the points of pride for Apple Silicon was its strong single-core performance compared to Intel, which had been stuck for years on single-core performance. Now, two years later, Apple Silicon is already behind Intel in single core; that's a pity.
The way I see it, this will probably push Apple to find a way to come back with something better. So I think all is good, at least for now.
Apple is Doomed! 🧐
They do, but fewer and fewer people buy a computer that even has internal expansion.
Correlation does not mean causation. More people are buying laptops because there is an increased need for portability (e.g. taking them to school, the cafe, etc.) given all the new usage patterns (every child has a computer, digital nomads, digital education, etc.). Laptops have always had fewer user-serviceable internals, particularly consumer machines, as soldered components tend to be cheaper and more space efficient.
For tasks that are highly parallelizable and don't need high precision, yes. For everything else, you still want a CPU.
You can get high-precision computations with GPGPU computing. However, it is easier for untrained programmers to screw up the robustness and accuracy of their code. It does not help that both AMD and Nvidia have been restricting precision on consumer GPUs to protect their high-margin workstation GPUs.
GPUs optimize towards high performance at the cost of precision.
Out of curiosity (maybe @leman knows the answer): what is the hit on Apple Silicon if you try to do FP64 ops on the GPU?
GPUs implement the usual FP standards and do correct rounding for the basic operations, so I'm not sure I'd put it this way. Sure, hardware-accelerated trigonometry etc. is often less accurate, but you can always implement your own, more accurate routines.
I don’t know if Apple GPUs implement denormals but they are not critical to achieving good numerical accuracy.
They don't have any native support for FP64. So you'd need to use extended-precision arithmetic, e.g. http://andrewthall.org/papers/df64_qf128.pdf
Depending on what you need, the performance overhead can vary greatly. You need anywhere from five to dozens of instructions to implement these things.
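To make that concrete, here's a rough sketch of the "double-float" (df64) idea from that paper. It's a plain-C toy of my own rather than a Metal kernel, the function names are mine, and it only behaves correctly if the compiler isn't allowed to re-associate floating-point math (no fast-math):

```c
/* Toy of the "double-float" (df64) idea from the Thall paper: a value is
 * stored as an unevaluated sum hi + lo of two float32 numbers.
 * Plain C for illustration only; a real GPU version would live in a
 * Metal/GLSL kernel. Build without fast-math or the error terms vanish. */
#include <stdio.h>

typedef struct { float hi, lo; } df64;

/* Knuth two-sum: a + b returned as hi plus the exact rounding error in lo. */
static df64 two_sum(float a, float b) {
    float s  = a + b;
    float bb = s - a;
    df64 r = { s, (a - (s - bb)) + (b - bb) };
    return r;
}

/* df64 + df64 addition ("sloppy" variant): roughly ten float ops, which is
 * where the "five to dozens of instructions" overhead comes from. */
static df64 df64_add(df64 a, df64 b) {
    df64 s = two_sum(a.hi, b.hi);
    float lo = s.lo + a.lo + b.lo;
    float hi = s.hi + lo;              /* renormalize the pair */
    df64 r = { hi, lo - (hi - s.hi) };
    return r;
}

int main(void) {
    /* 1 + 1e-8 rounds straight back to 1.0f in a single float32,
     * but the df64 pair keeps the small part in the lo component. */
    df64 one  = { 1.0f, 0.0f };
    df64 tiny = { 1e-8f, 0.0f };
    df64 sum  = df64_add(one, tiny);
    printf("hi = %.9g, lo = %.9g\n", sum.hi, sum.lo);
    printf("recombined in double: %.17g\n", (double)sum.hi + (double)sum.lo);
    return 0;
}
```

Every df64 operation costs a handful of extra float instructions, which is exactly the overhead mentioned above.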
Ah, that doesn't sound too bad. AMD is 16:1 (FP32:FP64), and Nvidia is worse than that (64:1). So it seems like Apple would be squarely in the middle. Note these are consumer specs.
I'm not sure what that has to do with anything, or with "correlation does not mean causation". The reasons most people buy laptops don't change the fact that most people buy laptops.
It changes why there are fewer people buying 'upgradable' machines.
Not necessarily. As @leman suggested, if you implement everything on your own then yes, you can end up with less precision. However, there are multiple approaches and libraries designed for high performance while ensuring good to high precision, e.g. GARPREC, StarNEig, etc. The idea that 'parallel is fast but imprecise' is outdated nowadays, as you can achieve the required precision in probably 99.9% of cases. Of course, a large chunk of research is still done with supercomputers in mind, but things tend to eventually scale down to consumer electronics. Even consumer MATLAB performs CUDA calculations in double precision by default, though the trend nowadays is to move to mixed-precision systems.
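To give a flavor of the compensated-arithmetic tricks such libraries build on (this is my own CPU-side toy in C, not how GARPREC or StarNEig actually work): naive float32 accumulation drifts badly, while carrying a small correction term recovers most of the lost precision without ever using FP64. Compile without fast-math or the correction gets optimized away:

```c
#include <stdio.h>

#define N 10000000

int main(void) {
    const float x = 0.1f;                       /* not exactly representable */
    double reference = (double)N * (double)x;   /* FP64 reference result */

    /* Naive float32 accumulation: rounding error grows as the sum grows. */
    float naive = 0.0f;
    for (int i = 0; i < N; i++) naive += x;

    /* Kahan (compensated) summation: still only float32 operations,
     * but a second float carries the rounding error of each step. */
    float sum = 0.0f, comp = 0.0f;
    for (int i = 0; i < N; i++) {
        float y = x - comp;
        float t = sum + y;
        comp = (t - sum) - y;   /* the part of y that got rounded away */
        sum = t;
    }

    printf("reference (FP64): %.6f\n", reference);
    printf("naive float32   : %.6f (error %.3f)\n", naive, naive - reference);
    printf("Kahan float32   : %.6f (error %.3f)\n", sum, sum - reference);
    return 0;
}
```

Same float32 hardware, very different result; the libraries mentioned above automate far more sophisticated versions of this kind of bookkeeping.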
It's not that Apple's current lineup "can't"; they have the ability to use an eGPU.
It's that macOS on Apple SoCs doesn't have the drivers.
The hardware can use an eGPU; it's purely a software limitation.
Nope. According to Hector Martin, the Asahi Linux lead developer, it is a hardware limitation, at least on the M1, as far as anyone knows: there is a PCIe Base Address Register (BAR) limitation that breaks eGPU drivers. Unless someone at Apple contradicts this, it sounds like a fundamental limitation.
It is quite sad. I have a 3080 as an eGPU on my HP X360 1040 G7, and it works great for ML and other scientific computing work. It can even play some games quite well, as long as I use an external monitor connected to the GPU.
Sure, but the original assertion was "Well, that's one way to look at it. But graphics cards exist for a reason." Which is true, but the context, "yeah, but most people cannot actually use one of them", is important. Even if you bring up eGPUs, most laptops can't do that. Apple's current line-up sadly can't, and for other vendors, most laptops don't even have Thunderbolt at all.
Fair.
But, while more and more code can use GPUs, I assure you there continues to be a lot of software in many areas that cannot make meaningful use of them. Most tasks consumers do don't benefit from parallelization, and most tasks enterprises do continue to be of the CRUD type — the CPU is rarely the bottleneck, and the algorithms are so simple that parallelization would often introduce latency instead of improving throughput.
There are certainly scenarios where "throw more cores at it" or "move it from CPU to GPU" are a great option, but there is still tons and tons of code written today where the use case doesn't lend itself well to those approaches, no matter how you implement it.
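As a contrived little demonstration of that point (my own C sketch, numbers will vary by machine): summing a modest array is so cheap that creating and joining worker threads usually costs more than the work itself.

```c
/* Build with: cc -O2 -pthread overhead.c */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define N 100000
static double data[N];

typedef struct { int lo, hi; double sum; } chunk;

static void *sum_chunk(void *arg) {
    chunk *c = arg;
    c->sum = 0.0;
    for (int i = c->lo; i < c->hi; i++) c->sum += data[i];
    return NULL;
}

static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = i * 0.5;

    /* Single-threaded: one tight loop, no coordination cost. */
    double t0 = now_ms();
    double serial = 0.0;
    for (int i = 0; i < N; i++) serial += data[i];
    double t1 = now_ms();

    /* "Parallel": four threads, but creating and joining them typically
     * takes longer than the summation itself for a task this small. */
    pthread_t tid[4];
    chunk c[4];
    double t2 = now_ms();
    for (int t = 0; t < 4; t++) {
        c[t].lo = t * (N / 4);
        c[t].hi = (t + 1) * (N / 4);
        pthread_create(&tid[t], NULL, sum_chunk, &c[t]);
    }
    double parallel = 0.0;
    for (int t = 0; t < 4; t++) {
        pthread_join(tid[t], NULL);
        parallel += c[t].sum;
    }
    double t3 = now_ms();

    printf("serial   %.3f ms (sum %.1f)\n", t1 - t0, serial);
    printf("threaded %.3f ms (sum %.1f)\n", t3 - t2, parallel);
    return 0;
}
```

Make the array a few hundred times bigger and the threads start to pay off; for small, simple operations, the coordination overhead is the bottleneck rather than the arithmetic.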
It's conceivable that they'll consider it a "pro" feature that the M2 Pro or M3 Pro gets (and that maybe eventually trickles down to non-Pro).
Or maybe Apple is confident enough in the capabilities of their own M chips that they don't deem eGPU support necessary. There was a case for it back when the thin form factor of the 2016 Intel MBPs meant that Apple had to fall back on less powerful AMD graphics cards to avoid excessive throttling. Now, maybe they just want to go all-in on their own custom silicon and cut third-party vendors out altogether?
Why would you think so? Apple has no interest whatsoever in supporting eGPUs.
Like, you could make the same argument about Thunderbolt: why support that at all? USB-C is good enough for almost all users. Dropping Thunderbolt support would simplify various aspects of their software and hardware architecture. Yet they offer it on all Macs, even the lowest-end Air and mini (but not the discontinued 12-inch MacBook), and even their higher-end iPads.
Don't you need Thunderbolt speeds to support their Studio Display? I assume it's there because their most expensive accessory demands it.
Says who?
You can make the same case about supporting Thunderbolt. Or doing another Mac Pro.
Apple does ship Thunderbolt devices and the means to build drivers for devices connected via PCIe and Thunderbolt (just not GPUs).
They also made it clear that they want to make an Apple Silicon Mac Pro (which does not mean they will).
Unlike eGPUs, which are not supported on Apple Silicon: there we have no evidence that Apple plans to offer any support, and Apple engineers have gone on the record multiple times stating that Apple Silicon computers ship with UMA and Apple GPUs.
I still don't see the Mac Pro doing UMA. It would be a dumb, pointless product. Are you saying it will only have integrated memory? And that it will have PCIe slots, but not for GPUs? Or that it won't have PCIe slots at all? What's the point of any of that?
There was some speculation on these forums that Apple showed a prototype of the Apple Silicon Mac Pro to some of their target market and got a negative response. Hence we are still waiting for an ASi Mac Pro, because no one would buy one as designed. I have no idea if that speculation is correct, but it would explain the delay.
This is a circular argument. "They do it, therefore, they do it."
Yes, and they have previously introduced eGPU support, and for now they haven't ported it to ARM.