As an example of current M1 benchmarks:

In Octane Render
An M1 takes 101 seconds
A 5700XT takes 21 seconds

So Apple really needs to up the GPU specs for the Apple Silicon Mac Pro.

That’s actually a really impressive result. The M1 is a 10-watt GPU that uses 128-bit LPDDR4 with bandwidth close to 60 GB/s. The 5700XT is a 225W GPU with memory bandwidth of 448 GB/s... one would expect a much larger difference than 5x...
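Running the quoted numbers makes the efficiency point concrete. This is a rough back-of-the-envelope sketch using only the figures from this thread (render times, TDP, bandwidth), not official specs:

```python
# Rough perf-per-watt comparison from the numbers quoted in this thread
# (Octane render times, TDP, and memory bandwidth as stated above).

m1 = {"time_s": 101, "watts": 10, "bw_gbs": 60}
xt5700 = {"time_s": 21, "watts": 225, "bw_gbs": 448}

speedup = m1["time_s"] / xt5700["time_s"]      # how much faster the 5700XT is
power_ratio = xt5700["watts"] / m1["watts"]    # how much more power it draws
bw_ratio = xt5700["bw_gbs"] / m1["bw_gbs"]     # how much more bandwidth it has

# Work done per watt, normalized to the M1
perf_per_watt_ratio = power_ratio / speedup

print(f"5700XT is {speedup:.1f}x faster")                          # ~4.8x
print(f"...while drawing {power_ratio:.1f}x the power")            # 22.5x
print(f"...with {bw_ratio:.1f}x the memory bandwidth")             # ~7.5x
print(f"M1 does ~{perf_per_watt_ratio:.1f}x more work per watt")   # ~4.7x
```

So on these numbers the 5700XT wins the render by ~4.8x while spending 22.5x the power, which is exactly why a 5x gap is "impressive" rather than damning.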
 
The truth is that this is a conundrum. I can’t imagine Apple hasn’t considered this issue since day one, and it’s early days yet for the transition, so I suspect they have an answer: either a Mac Pro with multiple GPU options, or some kind of beefed-up M-whatever chip with graphics improved many times over.

I have no clue what they’re planning. I think folks like the possibility of slotting in whatever GPU they think they need: the Afterburner for 3D modeling, the 580 for more CPU-heavy workflows, and a couple of options in between.

I feel like Apple must have a solution for this, but I could be wrong. I feel like not having a scalable GPU architecture, be it cards with additional AS GPU cores, or a more traditional GPU framework, would be a major misstep for Apple, but it’s not unthinkable.
 
That’s my concern too. If you look at their recent track record of where all the focus goes, it’s the consumer, because they are a consumer business.
 
To me, it doesn't matter what the benchmarks say. It matters how it plays out in the real world. I have a complex 3D animation in Blender that used to bring my RX 5700 XT to its knees, playing back at maybe 5 fps. On my M1 mini, the scene runs nearly in real time (about 23 fps). I'm perfectly happy with that.
 
That’s good to know. It probably has a lot to do with drivers and optimisations as well.
I use Rhino, which is still not native to M-series chips. I’m thinking of looking into Blender or C4D more because of this.
 
Examples:
  • Apple GPUs have unified memory, eGPUs by definition do not
  • Apple GPUs are TBDR devices, third party GPUs are not
  • Apple GPUs offer certain performance guarantees, third party GPUs do not
  • Metal is designed for Apple GPUs and the full range of Metal features is only available on Apple GPUs; third party GPUs only support a subset of those features

All these features (or different feature sets) are still supported in Metal on Apple Silicon, even if the hardware isn't there. It's false that this is incompatible with Apple Silicon. It's just a driver problem. Metal on Apple Silicon can even do traditional, non-tile-based rendering.

Even different Apple Silicon GPUs may have different features. So there is no "single feature set" on Apple Silicon to start with.
 
All these features (or different feature sets) are still supported in Metal on Apple Silicon, even if the hardware isn't there.
It's false that this is incompatible with Apple Silicon. It's just a driver problem.

It's all about the least common denominator. For example, Unified Memory gives me explicit zero-copy support as well as low-latency CPU/GPU synchronization, while a traditional dGPU does not (and no, it's not a "driver problem"). I can always treat a UMA-based system as a system with a separate GPU memory pool, but not the other way around. So if I develop, let's say, a professional app that utilizes the GPU, and I want to achieve the widest hardware compatibility, I will develop my app without relying on UMA. Now, that's fine on Intel-based machines, since only the integrated Intel GPU has UMA (and that fact is not exposed by Metal anyway), but it's definitely not fine for Apple Silicon Macs, where utilizing UMA can be a major performance and efficiency win. The same goes for games and Apple's TBDR features.
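The asymmetry described here (you can treat UMA as two pools, but not the reverse) can be sketched in a toy Python model. This is not Metal code; the two classes and their method names are invented purely to illustrate the point:

```python
# Toy model of unified vs. discrete GPU memory (hypothetical classes,
# not a real GPU API). Shows why a UMA system can always emulate
# separate pools, while a discrete GPU needs explicit transfers.

class UnifiedMemory:
    """CPU and GPU share one allocation: writes are visible to both."""
    def __init__(self, n):
        self._buf = [0.0] * n

    def cpu_view(self):
        return self._buf          # zero-copy: same storage

    def gpu_view(self):
        return self._buf          # zero-copy: same storage

class DiscreteMemory:
    """CPU and GPU have separate pools: data must be copied across."""
    def __init__(self, n):
        self._cpu = [0.0] * n
        self._gpu = [0.0] * n
        self.copies = 0

    def cpu_view(self):
        return self._cpu

    def upload(self):             # explicit transfer, like a blit over PCIe
        self._gpu[:] = self._cpu
        self.copies += 1

    def gpu_view(self):
        return self._gpu

uma = UnifiedMemory(4)
uma.cpu_view()[0] = 1.0
assert uma.gpu_view()[0] == 1.0   # visible immediately, no copy needed

dg = DiscreteMemory(4)
dg.cpu_view()[0] = 1.0
assert dg.gpu_view()[0] == 0.0    # stale until we transfer
dg.upload()
assert dg.gpu_view()[0] == 1.0 and dg.copies == 1
```

An app written against the `DiscreteMemory` model runs correctly on both kinds of hardware (it just does a redundant copy on UMA), but an app that assumes `UnifiedMemory` semantics breaks on a dGPU, which is the least-common-denominator problem the post describes.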

As I've written before, a decisive advantage of Apple Silicon is a unified GPU programming model. You know that you can make certain fundamental assumptions (that are very different from other GPUs!) and you can design your software to take advantage of them, knowing that it will run well on anything from an iPhone to a Mac Pro.

Metal on Apple Silicon can even do traditional, non-tile-based rendering.

This is simply incorrect. Apple Silicon GPUs are TBDR devices, and all they ever do is TBD rendering. It's just that from the API perspective you can treat them as a forward renderer and never notice the difference, since they will produce the same end result (unless you mess up your API use, as many games do). But you can achieve much better performance and efficiency by explicitly utilizing TBDR-specific features.
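The "same end result, different execution order" claim can be demonstrated with a toy rasterizer. This is a heavily simplified, hypothetical sketch (primitives are axis-aligned rectangles with a flat depth; real TBDR also defers shading per tile, which is where the efficiency win comes from):

```python
# Toy comparison of immediate-mode vs. tile-based depth testing.
# Hypothetical and heavily simplified: both produce an identical
# framebuffer, which is why the API can expose a TBDR GPU as if it
# were a forward renderer.

W, H, TILE = 8, 8, 4
prims = [
    {"x0": 0, "y0": 0, "x1": 8, "y1": 8, "z": 0.9, "color": 1},  # far quad
    {"x0": 2, "y0": 2, "x1": 6, "y1": 6, "z": 0.1, "color": 2},  # near quad
]

def immediate_mode(prims):
    depth = [[1.0] * W for _ in range(H)]
    color = [[0] * W for _ in range(H)]
    for p in prims:                         # draw order, depth test per pixel
        for y in range(p["y0"], p["y1"]):
            for x in range(p["x0"], p["x1"]):
                if p["z"] < depth[y][x]:
                    depth[y][x] = p["z"]
                    color[y][x] = p["color"]
    return color

def tile_based(prims):
    color = [[0] * W for _ in range(H)]
    for ty in range(0, H, TILE):            # bin work into tiles
        for tx in range(0, W, TILE):
            # per-tile depth buffer lives in fast on-chip memory on real hardware
            depth = [[1.0] * TILE for _ in range(TILE)]
            for p in prims:
                for y in range(max(p["y0"], ty), min(p["y1"], ty + TILE)):
                    for x in range(max(p["x0"], tx), min(p["x1"], tx + TILE)):
                        if p["z"] < depth[y - ty][x - tx]:
                            depth[y - ty][x - tx] = p["z"]
                            color[y][x] = p["color"]
    return color

assert immediate_mode(prims) == tile_based(prims)   # same end result
```

The tile loop only ever touches a small per-tile depth buffer, which is the hardware property that explicit TBDR features let developers exploit.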

Even different Apple Silicon GPUs may have different features. So there is no "single feature set" on Apple Silicon to start with.

Again, this is all about the least common denominator. Yes, Apple Silicon GPUs will support different feature sets, but they still have a large common core. And a common core allows you to make certain assumptions that can absolutely change how you design your app. There is a big difference between feature sets like "this GPU supports some additional intrinsic that lets me do matrix multiplication faster and that one does not" and "this GPU has unified memory and a persistent shared GPU cache and that one does not".
 
That’s always been the goal since the start of Apple, hasn’t it? Fully integrated hardware and software.

I think it is great for most users and totally support it. However, it’s probably not so great for some of my professional work, but that’s fine; Windows does a great job there. Right tool, right job.
Steve Jobs' project "Sand". Silicon "sand" in one end of the plant, computers come out the other. Everything under Apple control. All secretively made. All as Apple wants it.
 
Bottom line: no one knows except the engineers at Apple. The rest of us will find out when Apple decides they're ready to tell.
 
No Mac can use Nvidia GPUs on macOS because there are no drivers for those. You can use some AMD GPUs. But only on Intel Macs. Nvidia cannot release drivers anymore because Apple doesn't want to sign them.

If you use Bootcamp on an Intel Mac, you can also use Nvidia eGPUs, but not for macOS.

M1 Macs cannot use any eGPU at all right now, and I wouldn't bank on them getting this feature.
 
You're saying M1 Macs don't support eGPUs because of patents? Where did you get this info?
What is an eGPU patent? A GPU is just a PCIe card. You connect it, add some drivers, and you're done. M1 Macs have the drivers but they weren't recompiled for ARM.
They can't actually route PCIe lanes from the ARM chip to the Thunderbolt controller unless they make their own version of an FPGA Thunderbolt chip for ARM. And additional work would be needed, especially for recognizing third-party hardware, which Apple will not do.
 
I have a PC with an RTX 3080 GPU. I currently use the 16-inch MBP but will now switch to M1. Does it make sense to get an eGPU enclosure to render and do FX in FCP with the 3080? Am I going to get way better performance?
Apple hasn't supported Nvidia for a very long time.
 
Seeing how much emphasis Apple put on advertising UMA on Apple Silicon I just don’t see them not having it on the Mac Pro as well. Whether it will be one large SoC or multiple chips interconnected by a shared cache/memory die (like AMD does) is a technical question.

P.S. to be 100% clear, this is just my opinion. I might very well be wrong.
 
They can't actually route PCIe lanes from the ARM chip to the Thunderbolt controller unless they make their own version of an FPGA Thunderbolt chip for ARM. And additional work would be needed, especially for recognizing third-party hardware, which Apple will not do.

What do you mean? The M1 already has PCIe lanes connected to the Thunderbolt controller...
 
They can't actually route PCIe lanes from the ARM chip to the Thunderbolt controller unless they make their own version of an FPGA Thunderbolt chip for ARM. And additional work would be needed, especially for recognizing third-party hardware, which Apple will not do.
M1 Mac Thunderbolt ports work with existing Thunderbolt devices. Thunderbolt displays and docks and enclosures have USB controllers, Ethernet controllers, NVMe controllers, SATA controllers, etc. All these controllers are PCIe.
M1 Mac Thunderbolt ports work with most PCIe devices. The driver needs to exist in macOS or a third party can write a driver.
M1 Mac Thunderbolt ports can detect an eGPU. A GPU in an eGPU is a PCIe device.
M1 Mac Thunderbolt ports can use the USB controller of a Radeon Pro W5700 installed in an eGPU case. The USB-C port of the W5700 can do USB stuff, but display stuff requires the AMD driver to work.

If an eGPU can't work for displays on M1 Macs, I don't see why it couldn't at least be used for compute tasks. Since macOS drivers don't exist for these GPUs anyway, nothing is stopping AMD or Nvidia from making a compute driver. We could have CUDA on an RTX 3080, for example.
 
Apple is stopping them. Apple doesn’t want 3rd party GPU drivers. Nvidia used to release custom drivers for Mac but then Apple stopped signing them.
So it’s up to Apple alone whether we will get eGPU support on AS Macs or not.
 
I didn't say GPU driver. Not for displays. For compute.
 
This is still a GPU driver. And why would AMD or Nvidia want to make a compute-only driver for M1 Macs, which are a tiny niche compared to the market they (Nvidia and AMD) already have?

And the niche gets even smaller when you consider how few people would actually buy an eGPU if they could.

And don't forget that all their work in that regard could be ended by Apple in a split second if Apple decides those drivers are no longer "allowed" on macOS.

This is simply a bad business decision. It would be nice for the few users who want it, but a bad business decision for Nvidia. And like Apple, Nvidia is a for-profit company.
 
Agreed. I didn't say they would or should, only that they could.

A third party developer could do some AMD or Nvidia stuff with the Linux open source drivers. Do those drivers include compute stuff? You couldn't do macOS native display stuff (but maybe you could do virtual display stuff like DisplayLink does).

Apple ending stuff in a split second has always been an issue with developing for macOS. The type of stuff we're talking about here is PCIe device driver. These still exist in macOS. Thunderbolt is used to connect PCIe devices.
 

I doubt that a generic PCIe driver will do the job. Is the interface equipped to efficiently deal with the large data transfers needed for high-performance GPU compute? Apple has a separate driver type for storage devices, which makes me think that GPUs would need one too.
 
That’s actually a really impressive result. The M1 is a 10-watt GPU that uses 128-bit LPDDR4 with bandwidth close to 60 GB/s. The 5700XT is a 225W GPU with memory bandwidth of 448 GB/s... one would expect a much larger difference than 5x...
True, but at the end of the day, it doesn’t matter if it’s more efficient, you need that brute force for renders. If it takes 50 minutes to export a project on deadline instead of 10 minutes, it won’t help if you tell the client “But it IS more efficient.”

The big question is how long it will take for the M1 architecture to be able to push the brute force that discrete GPUs currently give professional users.
 

You are absolutely right. What I was trying to say is that with this level of power efficiency we might just see a lot of GPU power in an unprecedentedly compact package. Of course, it all depends on what Apple intends to deliver.
 