One of the points of pride for Apple Silicon was its strong single-core performance compared to Intel, which had been stuck for years on single-core performance. Now, two years later, Apple Silicon is already behind Intel in single-core performance; that's a pity.
 
One of the points of pride for Apple Silicon was its strong single-core performance compared to Intel, which had been stuck for years on single-core performance. Now, two years later, Apple Silicon is already behind Intel in single-core performance; that's a pity.
The way I see it, it will probably push Apple to find ways to come back one better. So I think all is good, at least for now.
 
They do, but fewer and fewer people buy a computer that even has internal expansion.
Correlation does not mean causation. More people are buying laptops because there is an increased need for portability (e.g. taking it to school, a cafe, etc.) given all the new concepts (e.g. every child has a computer, digital nomads, digital education, etc.). Laptops have always had fewer user-serviceable internals, particularly consumer machines, as soldered components tend to be cheaper and more space efficient.

For tasks that are highly parallelizable and don't need high precision, yes. For everything else, you still want a CPU.
You can get high-precision computations with GP-GPU computing. However, it is easier for untrained programmers to screw up the robustness and accuracy of their code. It does not help that both AMD and Nvidia have been restricting precision on their consumer GPUs to protect their high-margin workstation GPUs.

It took a while, but we have reached the point where most professional applications exploit GPUs. Similarly, we are now in the process of bringing GP-GPU computing to web apps. Of course, the best solution would have been to get rid of the von Neumann architecture (as colleagues used to advocate ~10 years ago), but that will never happen; too many enterprises rely on legacy software, so any transition needs to be painfully slow. This is something Intel knows, hence their entering the dGPU market and adopting a big.LITTLE-style architecture for their CPUs.
 
Correlation does not mean causation. More people are buying laptops because there is an increased need for portability (e.g. taking it to school, a cafe, etc.) given all the new concepts (e.g. every child has a computer, digital nomads, digital education, etc.). Laptops have always had fewer user-serviceable internals, particularly consumer machines, as soldered components tend to be cheaper and more space efficient.

I'm not sure what that has to do with anything, or with "correlation does not mean causation". The reasons most people buy laptops don't change that most people buy laptops.

You can get high-precision computations with GP-GPU computing.

GPUs optimize towards high performance at the cost of precision.

 
I'm not sure what that has to do with anything, or with "correlation does not mean causation". The reasons most people buy laptops don't change that most people buy laptops.



GPUs optimize towards high performance at the cost of precision.
Out of curiosity (maybe @leman knows the answer): what is the hit on Apple Silicon if you try to do FP64 ops on the GPU?
 
GPUs optimize towards high performance at the cost of precision.

GPUs implement the usual FP standards and do correct rounding for the basic operations, so I’m not sure I’d put it this way. Sure, hardware-accelerated trigonometry etc. is often less accurate, but you can always implement your own, more accurate routines.

I don’t know if Apple GPUs implement denormals but they are not critical to achieving good numerical accuracy.

Out of curiosity (maybe @leman knows the answer): what is the hit on Apple Silicon if you try to do FP64 ops on the GPU?

They don’t have any native support for FP64. So you’d need to use extended-precision arithmetic, e.g. http://andrewthall.org/papers/df64_qf128.pdf

Depending on what you need the performance overhead can vary greatly. You need anywhere from five to dozens of instructions to implement these things.
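If anyone wants to see what that emulation looks like, here is a minimal sketch of the "double-float" idea from the paper linked above, written in plain C++ with float so it runs anywhere (the type and function names are made up for illustration; a real GPU version would live in a shader, need the full set of operations, and must be compiled without fast-math so the rounding-error tricks survive):

#include <cstdio>

// A value is stored as an unevaluated sum of two FP32 numbers (hi + lo).
struct df64 { float hi, lo; };

// Knuth's error-free two-sum: s = fl(a + b) plus the exact rounding error e.
static df64 two_sum(float a, float b) {
    float s = a + b;
    float v = s - a;
    float e = (a - (s - v)) + (b - v);
    return { s, e };
}

// Simplified double-float addition (Dekker/Shewchuk style), then renormalize.
static df64 df64_add(df64 a, df64 b) {
    df64 s = two_sum(a.hi, b.hi);
    float lo = s.lo + a.lo + b.lo;
    return two_sum(s.hi, lo);
}

int main() {
    // 1.0f + 1e-8f rounds straight back to 1.0f in plain FP32,
    // but the hi/lo pair keeps the small term around.
    df64 x = { 1.0f, 0.0f };
    df64 y = { 1e-8f, 0.0f };
    df64 z = df64_add(x, y);
    std::printf("hi = %.9g, lo = %.9g\n", z.hi, z.lo);  // lo carries ~1e-8
}

Even this simplified addition is already a dozen or so FP32 operations, which is where the "five to dozens of instructions" estimate comes from; multiplication and division need more (and an FMA).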
 
GPUs implement the usual FP standards and do correct rounding for the basic operations, so I’m not sure I’d put it this way. Sure, hardware-accelerated trigonometry etc. is often less accurate, but you can always implement your own, more accurate routines.

I don’t know if Apple GPUs implement denormals but they are not critical to achieving good numerical accuracy.



They don’t have any native support for FP64. So you’d need to use extended-precision arithmetic, e.g. http://andrewthall.org/papers/df64_qf128.pdf

Depending on what you need the performance overhead can vary greatly. You need anywhere from five to dozens of instructions to implement these things.
Ah, that doesn't sound too bad. AMD is 16:1 (FP32:FP64); Nvidia is worse than that (64:1). So it seems like Apple would be squarely in the middle. Note these are consumer specs.
 
Ah, that doesn't sound too bad. AMD is 16:1 (FP32:FP64); Nvidia is worse than that (64:1). So it seems like Apple would be squarely in the middle. Note these are consumer specs.

There is nothing preventing folks who need more precision from using exactly the same techniques on AMD or Nvidia hardware. In fact, I'd say that using native GPU doubles on consumer hardware is not a smart choice.
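To put rough numbers on that (back-of-the-envelope only, using the instruction counts quoted a few posts up rather than any measured figures): if each emulated FP64 operation costs roughly N FP32 instructions, the effective FP32:FP64 throughput ratio is about N:1, so 5 instructions per op gives ~5:1 and 30 gives ~30:1. That is the sense in which emulated doubles land in the same ballpark as AMD's 16:1 consumer parts and ahead of Nvidia's 64:1, ignoring register pressure and memory traffic.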
 
I'm not sure what that has to do with anything, or with "correlation does not mean causation". The reasons most people buy laptops don't change that most people buy laptops.
It changes why there are fewer people buying 'upgradable' machines.

GPUs optimize towards high performance at the cost of precision.
Not necessarily. As @leman suggested, if you implement everything on your own then yes, you can end up with less precision. However, there are multiple approaches and libraries meant for high performance while ensuring good to high precision, e.g. GARPREC, StarNEig, etc. The idea that 'parallel is fast but imprecise' is, nowadays, outdated, as you can achieve the required precision in probably 99.9% of cases. Of course, a large chunk of research is still done with supercomputers in mind, but things tend to eventually scale down to consumer electronics. Even consumer Matlab offers double precision on CUDA calculations by default, though the trend nowadays is to move to mixed-precision systems.

Disclaimer: I am not in GP-GPU research directly, but I have been working with people from HPC on robust ML training. The issue of precision in tensor calculations is quite important when it comes to understanding the sensitivity of different (hyper-)parameters.
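As a toy demonstration of that point (plain C++ on the CPU, deliberately not using any of the libraries mentioned above): a compensated FP32 sum recovers essentially the same result as an FP64 reference, and the same compensation idea is what many GPU reductions and mixed-precision libraries lean on.

#include <cstdio>
#include <vector>

int main() {
    // Ten million values of 1e-4f; the exact sum is ~1000.
    std::vector<float> data(10000000, 1e-4f);

    float naive = 0.0f;
    for (float x : data) naive += x;   // plain FP32 accumulation

    float sum = 0.0f, c = 0.0f;        // Kahan compensated summation
    for (float x : data) {
        float y = x - c;
        float t = sum + y;
        c = (t - sum) - y;             // rounding error of this addition
        sum = t;
    }

    double ref = 0.0;
    for (float x : data) ref += x;     // FP64 reference

    std::printf("naive FP32: %f\n", naive);  // visibly off
    std::printf("Kahan FP32: %f\n", sum);    // essentially matches the reference
    std::printf("FP64 ref  : %f\n", ref);
}

(As with any compensated-arithmetic trick, compile without fast-math or the compensation terms get optimized away.)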
 
It changes why there are fewer people buying 'upgradable' machines.

Sure, but the original assertion was "Well that’s one way to look at it. But graphics cards exist for a reason."

Which is true, but the context "yeah, but most people cannot actually use one of them" is important. Even if you bring up eGPUs, most laptops can't do that. Apple's current line-up sadly can't, and for other vendors, most laptops don't even have Thunderbolt at all.

The idea that 'parallel is fast but imprecise' is, nowadays, outdated, as you can achieve the required precision in probably 99.9% of cases.

Fair.

But, while more and more code can use GPUs, I assure you there continues to be a lot of software in many areas that cannot make meaningful use of them. Most tasks consumers do don't benefit from parallelization, and most tasks enterprises do continue to be of the CRUD type — the CPU is rarely the bottleneck, and the algorithms are so simple that parallelization would often introduce latency instead of improving throughput.

There are certainly scenarios where "throw more cores at it" or "move it from CPU to GPU" are a great option, but there is still tons and tons of code written today where the use case doesn't lend itself well to those approaches, no matter how you implement it.

 
Even if you bring up eGPUs, most laptops can't do that. Apple's current line-up sadly can't, and for other vendors, most laptops don't even have Thunderbolt at all.
It's not that Apple's current lineup "can't"; the hardware has the ability to use an eGPU.

It's that macOS on Apple SoCs doesn't have the drivers.

The hardware can use an eGPU; it's purely a software limitation.
 
It's not that Apple's current lineup "can't"; the hardware has the ability to use an eGPU.

It's that macOS on Apple SoCs doesn't have the drivers.

The hardware can use an eGPU; it's purely a software limitation.

Well, the end result is the same — you buy an ARM Mac, and it can't use an eGPU. Whether that's a SoC limitation, a firmware one or a driver one doesn't really matter.

(I'm guessing it has to do with iBoot.)
 
It's not that Apple's current lineup "can't"; the hardware has the ability to use an eGPU.

It's that macOS on Apple SoCs doesn't have the drivers.

The hardware can use an eGPU; it's purely a software limitation.
Nope. According to Hector Martin, it is a hardware limitation on the M1, at least as far as anyone knows. Per the Asahi Linux lead developer, there is a PCIe Base Address Register (BAR) limitation that breaks eGPU drivers. Unless someone at Apple contradicts this, it sounds like a fundamental limitation.

 
Sure, but the original assertion was "Well that’s one way to look at it. But graphics cards exist for a reason."

Which is true, but the context "yeah, but most people cannot actually use one of them" is important. Even if you bring up eGPUs, most laptops can't do that. Apple's current line-up sadly can't, and for other vendors, most laptops don't even have Thunderbolt at all.
It is quite sad. I have a 3080 as an eGPU on my HP X360 1040 G7 and it works great for ML and other scientific computing work. It can even play some games quite well —as long as I use an external monitor connected to the GPU.

I am disappointed by the lack of eGPU support. I was hoping the M2 would have that, which is partly why I waited until now to order an MBP (I left Apple after they discontinued the 17" and instead pushed for thin designs with butterfly keyboards over function). I now have a MBP 16" on order, but I think the eGPU with my HP will either stay in a corner of my office or go to one of my PhD students...

Fair.

But, while more and more code can use GPUs, I assure you there continues to be a lot of software in many areas that cannot make meaningful use of them. Most tasks consumers do don't benefit from parallelization, and most tasks enterprises do continue to be of the CRUD type — the CPU is rarely the bottleneck, and the algorithms are so simple that parallelization would often introduce latency instead of improving throughput.

Yes, and a large chunk of the latency problem is down to our reliance on the von Neumann architecture. At least a large chunk of the previous bottlenecks is being eliminated now that SoC designs are becoming widespread and, even on 'traditional desktops', technologies such as GPUDirect give the GPU greater control over other resources.

There are certainly scenarios where "throw more cores at it" or "move it from CPU to GPU" are a great option, but there is still tons and tons of code written today where the use case doesn't lend itself well to those approaches, no matter how you implement it.

On this, I agree. My main complaint was the outdated narrative of 'GP-GPU computing is imprecise' ;-)
 
Nope. According to Hector Martin, it is a hardware limitation on the M1, at least as far as anyone knows. Per the Asahi Linux lead developer, there is a PCIe Base Address Register (BAR) limitation that breaks eGPU drivers. Unless someone at Apple contradicts this, it sounds like a fundamental limitation.


Conceivable that they’ll consider it a “pro” feature that the M2 Pro or M3 Pro gets (and that maybe eventually trickles down to non-Pro)
 
Conceivable that they’ll consider it a “pro” feature that the M2 Pro or M3 Pro gets (and that maybe eventually trickles down to non-Pro)

Why would you think so? Apple has no interest whatsoever in supporting eGPUs. What other external devices require this kind of mapping support?
 
Conceivable that they’ll consider it a “pro” feature that the M2 Pro or M3 Pro gets (and that maybe eventually trickles down to non-Pro)
Or maybe Apple is confident enough in the capabilities of their own M chips that they don't deem eGPU support necessary. There was a case for it back when the thin form factor of the 2016 Intel MBPs meant that Apple had to fall back on less powerful AMD graphics cards to avoid excessive throttling. Now, maybe they just want to go all-in on their own custom silicon and cut third party vendors out altogether?
 
Or maybe Apple is confident enough in the capabilities of their own M chips that they don't deem eGPU support necessary.

Maybe, but that would be a bummer if so.

There was a case for it back when the thin form factor of the 2016 Intel MBPs meant that Apple had to fall back on less powerful AMD graphics cards to avoid excessive throttling.

I'm not sure that was the only reason, but it was a reason, sure.

Now, maybe they just want to go all-in on their own custom silicon and cut third party vendors out altogether?

There are certainly complications that eGPU would introduce. For example, it's unclear what their plan (if any) is regarding third-party GPU drivers. But it doesn't follow that it's unnecessary, just that it's maybe not worth the effort in their opinion.

There will always be high-end GPUs that have more horsepower than an integrated one. The only question is: does Apple care enough about that slice of the market?

Like, you could make the same argument about Thunderbolt: why support that at all? USB-C is good enough for almost all users. Dropping Thunderbolt support would simplify various aspects of their software and hardware architecture. Yet, they offer it on all Macs, even the lowest-end Air and mini (but not the discontinued 12-inch MacBook), and even their higher-end iPads.
 
Like, you could make the same argument about Thunderbolt: why support that at all? USB-C is good enough for almost all users. Dropping Thunderbolt support would simplify various aspects of their software and hardware architecture. Yet, they offer it on all Macs, even the lowest-end Air and mini (but not the discontinued 12-inch MacBook), and even their higher-end iPads.
Don't you need Thunderbolt speeds to support their Studio Display? I assume it's there because their most expensive accessory demands it.
 
Says who?

This has been discussed back and forth. There is no indication whatsoever that Apple plans to support eGPUs with Apple Silicon, not to mention that it would totally break their GPU programming model. They have put in a lot of effort to build a set of programming interfaces and hardware assumptions that remain the same across a wide class of device types; do you really think they will undo all of that just like this?

The only prospect I see for eGPUs on Apple Silicon is via VMs or hacked-together solutions like Asahi Linux, but of course this would require that Apple update their hardware I/O to support the memory mapping these devices need. And it is not clear to me that they ever will.

You can make the same case about supporting Thunderbolt. Or doing another Mac Pro.

I really don't see how. Apple does ship Thunderbolt devices and the means to build drivers for devices connected via PCIe and Thunderbolt (just not GPUs). And they have advertised the ability of the new Macs to use high-speed external storage or displays. In fact, they have implemented a whole new user-space driver model and framework exactly for these kinds of applications.

They also made it clear that they want to make an Apple Silicon Mac Pro (which does not mean they will) and they have demonstrated that they are developing scalable high-end workstation-class processors.

These things actually exist and are supported today. Unlike eGPUs, which are not supported on Apple Silicon, where we have no evidence that Apple plans to offer any support and where Apple engineers went on the record multiple times stating that Apple Silicon computers ship with UMA and Apple GPUs.
 
Apple does ship Thunderbolt devices and the means to build drivers for devices connected via PCIe and Thunderbolt (just not GPUs).

This is a circular argument. "They do it, therefore, they do it."

They also made it clear that they want to make an Apple Silicon Mac Pro (which does not mean they will)

Yes, and they have previously introduced eGPU support, and for now they haven't ported it to ARM.

Unlike eGPUs, which are not supported on Apple Silicon, where we have no evidence that Apple plans to offer any support and where Apple engineers went on the record multiple times stating that Apple Silicon computers ship with UMA and Apple GPUs.

I still don't see the Mac Pro doing UMA. It would be a dumb, pointless product. Are you saying it will only have integrated memory? And that it will have PCIe slots, but not for GPUs? Or that it won't have PCIe slots at all? What's the point of any of that?
 
I still don't see the Mac Pro doing UMA. It would be a dumb, pointless product. Are you saying it will only have integrated memory? And that it will have PCIe slots, but not for GPUs? Or that it won't have PCIe slots at all? What's the point of any of that?
There was some speculation on these forums that Apple showed a prototype of the Apple silicon Mac Pro to some of their target market and got a negative response. Hence we are still waiting for an ASi Mac Pro because no one would buy one as designed. I have no idea if that speculation is correct but it would explain the delay.
 
This is a circular argument. "They do it, therefore, they do it."

It's the only argument. We are talking about the likelihood of shipping feature X. Well, they are shipping Thunderbolt and were committed to it from day one. Nothing to discuss. I don't see how one can argue that Apple is not interested in Thunderbolt without descending into pure absurdity.


Yes, and they have previously introduced eGPU support, and for now they haven't ported it to ARM.

They have introduced eGPU support because both the hardware (GPU-compatible PCIe controllers) and the software (GPU drivers) were already there. They only needed to add basic hot-swap support for GPUs and voila.

On Apple Silicon we don't have the hardware (as Apple's controllers apparently don't support the relevant modes, either for security reasons or because Apple doesn't care), we don't have the software (no third-party GPU drivers, or any indication that they will ever be supported), not to mention that we have a different GPU programming model that Apple is interested in enforcing throughout their entire ecosystem, for obvious reasons.

My position on this has always been very clear: no third-party GPUs for Apple Silicon, ever. I just don't see them going that way. It would undermine all the work they have been pouring into Metal and their in-house GPUs.


I still don't see the Mac Pro doing UMA. It would be a dumb, pointless product. Are you saying it will only have integrated memory? And that it will have PCIe slots, but not for GPUs? Or that it won't have PCIe slots at all? What's the point of any of that?

I believe it will have UMA and integrated memory only, and that it will have PCIe slots but not for GPUs. I also believe that in order to be a successful product it would either need to use an SoC that is so much faster than anything else that third-party GPUs would become pointless in principle; or, use a modular NUMA architecture where you can use multiple compute boards (each with their own SoC/CPU/GPU/integrated RAM) + shared extendable RAM. But that's a whole other can of worms and it's not clear to me that Apple will go there.

Either way, they will do just fine even without a Mac Pro. Maybe we will see Apple abandon the high-end desktop market altogether. Who knows.
 