I just don't believe that Apple's GPU in the next Mac mini is going to be particularly competitive with a Radeon 5700 XT (or whatever comes out this year) in an external GPU. And I think that is likely to hold true for the next iMac, and is a near certainty for the next Mac Pro.
I'm sceptical as well, but Apple has been very clear that they see their GPUs as competitive with at least current, if not future, dedicated GPUs from third parties.
 
Do you think ARM Macs won't have AMD GPUs but will have custom, integrated and/or ARM GPUs?

I would guess (even without those WWDC slides) that the first round of production ARM Macs will exclusively use integrated A-series GPUs - and they'll be replacing the Intel iGPU-based machines that are (a) the biggest-selling models and (b) the ones currently being held back the most by Intel chips.

We'll probably have to wait until the latter half of the "2 year transition" to see whether there are going to be "pro" machines with discrete GPUs. By then we'll have seen how Apple's laptop/desktop silicon performs and will maybe have stopped worrying.

I wouldn't fall off my chair in shock if - when the transition is complete - the only Mac with a discrete GPU is the Mac Pro, and that will probably be the last Mac to go ARM, if it does at all. However, since every other Mac "discrete" GPU is built into the mainboard anyway, the only practical upshot for us users will be how well the new chips perform. The fact that even the A12Z dev system clearly doesn't suck - if you bear in mind that it was designed for iPad power constraints - is hopeful there.

I wouldn't read too much into slides at WWDC talks - at least when it comes to predicting the future. I'm sure they are technically correct, but they're still only what Apple thinks "unprivileged" developers need to know right now.
 
I watched this session now and what you are referring to is the tile memory on a TBDR architecture. This is not VRAM in the strict sense of the word (more like a cache). Also, all the discussion in the video (including the compatibility mode) is about developers writing buggy code - basically Apple is choosing defaults that work around common API misuse. This misuse might not be apparent on an AMD GPU, but will show up on an Apple GPU, since the latter gives you more fine-grained control over GPU memory.

But all in all, they have given no indication of dedicated VRAM. They only talked about features iPhone GPUs have had for years.

Regarding the second part of that: yes, it's to mitigate incorrect coding practices that were invisible on prior GPUs.

Regarding the first point: I guess I had gotten the wrong end of the stick on that. I should have paid more attention when I watched it originally.

Regardless, my prediction is still that there will be some form of either VRAM or something similar to Crystal Well's L4 cache.
Cheers
 
I wouldn't expect the 6000 series, nor anything that powerful. See the 16-inch MacBook Pro and the 5XXXM-series update. That's Navi, and likely the extent Apple will care to update, due to inventory control (they recycle the same graphics options across the high-end laptop and desktop). iMacs use mobile chips. The iMac Pro might use this, but I think that line is a dead end now that they have the Mac Pro.

I'm kind of torn on what to believe. Some rumors say Navi and some rumors say Navi 2x.

I think Navi 2x is a possibility because the new architecture makes it very efficient (power consumption and heat). It would be easy to incorporate it into an iMac.
There will only be integrated Apple GPUs.



Source

I'm hoping they will be as powerful as AMD's GPUs.

Edit: Even with the current rumors, I hope Apple will announce the release of the upcoming 2020 Intel iMac soon.
 
I'm kind of torn on what to believe. Some rumors say Navi and some rumors say Navi 2x.
I think Navi 2x is a possibility because the new architecture makes it very efficient (power consumption and heat). It would be easy to incorporate it into an iMac.

I don't see them using Big Navi unless the 6000 series has some unannounced mobile chips that Apple thinks would be a fitting capstone to the end of the Intel Macs. I say the 5XXX series because they're already shipping those now, and the mainstream iMacs have generally shared the same mobile GPU in the base configurations; they would need another MBP refresh to keep supply parity. These are incremental upgrade options that are available right now and were just fully refreshed with the recent 5600M HBM2 option.

Maybe Big Navi on the iMac Pro. But I don't know if they're invested in maintaining that model. Vega 56/64 is power hungry and on what feels like ancient 14nm, so that would definitely be a nice generational upgrade there.

Actually, of course, they could always do incremental updates as needed, whenever. Drag and replace, so long as Intel has options for whichever end-of-line socket is in the last machine and a discrete AMD option fits thermally.
 
My speculation is that it might make sense for the 16" system, which is already more expensive. Savings from simplifying the overall architecture, as well as not having to rely on IHVs, might make it cost-efficient - and it would sufficiently differentiate the 16" from the rest of the line. But that is my wishful thinking, of course.
I watched this session now and what you are referring to is the tile memory on a TBDR architecture. This is not VRAM in the strict sense of the word (more like a cache). Also, all the discussion in the video (including the compatibility mode) is about developers writing buggy code - basically Apple is choosing defaults that work around common API misuse. This misuse might not be apparent on an AMD GPU, but will show up on an Apple GPU, since the latter gives you more fine-grained control over GPU memory.

But all in all, they have given no indication of dedicated VRAM. They only talked about features iPhone GPUs have had for years.
Regarding the second part of that: yes, it's to mitigate incorrect coding practices that were invisible on prior GPUs.

Regarding the first point: I guess I had gotten the wrong end of the stick on that. I should have paid more attention when I watched it originally.

Regardless, my prediction is still that there will be some form of either VRAM or something similar to Crystal Well's L4 cache.
Cheers

I just started the WWDC session titled "Exploring the New System Architecture of Apple Silicon Macs". Early on in that talk they actually say that their chips will combine VRAM and system RAM in a unified architecture.
The video I mentioned just before says all of this will become just one chip:

[Screenshot from the session, 2020-06-28]
 
I would guess (even without those WWDC slides) that the first round of production ARM Macs will exclusively use integrated A-series GPUs - and they'll be replacing the Intel iGPU-based machines that are (a) the biggest-selling models and (b) the ones currently being held back the most by Intel chips.

We'll probably have to wait until the latter half of the "2 year transition" to see whether there are going to be "pro" machines with discrete GPUs. By then we'll have seen how Apple's laptop/desktop silicon performs and will maybe have stopped worrying.

I wouldn't fall off my chair in shock if - when the transition is complete - the only Mac with a discrete GPU is the Mac Pro, and that will probably be the last Mac to go ARM, if it does at all. However, since every other Mac "discrete" GPU is built into the mainboard anyway, the only practical upshot for us users will be how well the new chips perform. The fact that even the A12Z dev system clearly doesn't suck - if you bear in mind that it was designed for iPad power constraints - is hopeful there.

I wouldn't read too much into slides at WWDC talks - at least when it comes to predicting the future. I'm sure they are technically correct, but they're still only what Apple thinks "unprivileged" developers need to know right now.
Check out the post above. The path forward for Apple Silicon is with CPUs and GPUs on one system on a chip.

"Intel-based Macs contain a multi-core CPU and many have a discrete GPU ... Machines with a discrete GPU have separate memory for the CPU and GPU. Now, the new Apple Silicon Macs combine all these components into a single system on a chip, or SoC. Building everything into one chip gives the system a unified memory architecture. This means that the CPU and GPU are working over the same memory."

 
Check out the post above. The path forward for Apple Silicon is with CPUs and GPUs on one system on a chip.

"Intel-based Macs contain a multi-core CPU and many have a discrete GPU ... Machines with a discrete GPU have separate memory for the CPU and GPU. Now, the new Apple Silicon Macs combine all these components into a single system on a chip, or SoC. Building everything into one chip gives the system a unified memory architecture. This means that the CPU and GPU are working over the same memory."


Hopefully they can pull off some magic. It's hard to believe they will be able to match dGPU performance with an iGPU.
 
It's hard to believe they will be able to match dGPU performance with an iGPU.

If memory bandwidth is not a problem, the difference between an iGPU and a dGPU disappears. Apple GPUs are already less reliant on RAM speeds than Nvidia/AMD; if they solve the rest of the technical issues, it is not unrealistic for them to match the higher end of mid-range performance. True high-end is probably going to be difficult for an SoC, however.
 
What if this is the scenario for the first couple of Apple Silicon Macs?

13" MacBook
Mac mini
24" iMac

They use the Apple GPU, since those models use integrated graphics anyway, and we STILL don't know what (if anything) Apple will use in the 16" MacBook, 27" iMac, iMac Pro and of course the Mac Pro.

I could totally see all of us still being in the dark on this subject all the way into June of next year. But I hope not, obviously.
 
We will probably see a mix of integrated and discrete. The lower-end models will probably be integrated, while higher-end ones with bigger thermal envelopes will most likely have discrete graphics as at least an option. Personally, I am hoping for a 13" MBP with a dGPU, as ARM should use less power, freeing up thermal budget for a GPU. Hopefully there will also be switchable graphics like we have now.

That won't happen, for the sole reason that Apple's A-series GPUs are way more powerful than Intel iGPUs, and we've only had iGPUs in the 13" models since 2010. Just a brief example: the A12Z (a tablet SoC) scores 50% higher in OpenCL than the Intel Iris Plus (in laptop SoCs), and the A14 variant used for Macs will be way beefier in both CPU and GPU than the A12Z.

So even if it once made sense due to having more thermal and power headroom, I don't see Apple now relying on a third-party competitor when the main reason to go ARM was to have 100% control of their resources.

In high-end computers like the Mac Pro or even the 16" MBP? Sure, I don't see A-series GPUs being strong enough yet, but that is just the high end of the line.
 
So even if it once made sense due to having more thermal and power headroom, I don't see Apple now relying on a third-party competitor when the main reason to go ARM was to have 100% control of their resources.

Exactly. Apple just took control of their future, and they are going to do everything they can to avoid ending up in another situation like the one with NVIDIA, or having to WAIT for a vendor like AMD to ship Radeons.

I think the GPU question is the biggest one to come out of WWDC, IMO.
 
That won't happen, for the sole reason that Apple's A-series GPUs are way more powerful than Intel iGPUs, and we've only had iGPUs in the 13" models since 2010. Just a brief example: the A12Z (a tablet SoC) scores 50% higher in OpenCL than the Intel Iris Plus (in laptop SoCs), and the A14 variant used for Macs will be way beefier in both CPU and GPU than the A12Z.

So even if it once made sense due to having more thermal and power headroom, I don't see Apple now relying on a third-party competitor when the main reason to go ARM was to have 100% control of their resources.

In high-end computers like the Mac Pro or even the 16" MBP? Sure, I don't see A-series GPUs being strong enough yet, but that is just the high end of the line.

+1

No doubt in my mind that Apple will continue to invest in the performance of their own GPUs and will avoid, to the extent possible, having a dGPU in most Macs.

The solution for the 16" MBP and the iMac/Mac Pro is anyone's guess. But for these Macs there's still the possibility of an Apple Silicon dGPU, if you take into account Apple's R&D budget over the last two years.
 
That won't happen, for the sole reason that Apple's A-series GPUs are way more powerful than Intel iGPUs, and we've only had iGPUs in the 13" models since 2010. Just a brief example: the A12Z (a tablet SoC) scores 50% higher in OpenCL than the Intel Iris Plus (in laptop SoCs), and the A14 variant used for Macs will be way beefier in both CPU and GPU than the A12Z.

So even if it once made sense due to having more thermal and power headroom, I don't see Apple now relying on a third-party competitor when the main reason to go ARM was to have 100% control of their resources.

In high-end computers like the Mac Pro or even the 16" MBP? Sure, I don't see A-series GPUs being strong enough yet, but that is just the high end of the line.
Ok, that makes sense for the low-power 9W MBA and the 15W base-model MBP, but what about the higher-wattage 28W 13" MBP and the 16" MBP, and of course the desktops? Are they simply going to increase the clock speed on whatever chip they use? After a while it stops making sense to keep increasing the clock speed, as you hit a problem of exponentially higher power consumption. Intel tried this with the Pentium 4, chasing higher clock speeds on an architecture with relatively low IPC (NetBurst), and the end result was a space heater. More cores means more chips to develop, unless they just make one monster chip (tons of CPU and GPU cores) and disable cores and underclock as needed in the lower-power models. But that is very expensive to develop and manufacture, and it's even more expensive to design multiple variants with multiple core counts and caches, so I think we will see discrete GPUs on the high-end models.

Apple chooses what's best for Apple, and if that means ARM CPUs and AMD/NVIDIA GPUs, then I'm sure that is what they'll choose. Long term, I expect we'll see Apple GPUs in Macs across the lineup, because, as you said, Apple wants control.
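As a rough back-of-envelope for the "exponentially higher power consumption" point above, here is a tiny sketch using the standard dynamic-power model P ≈ C·V²·f. The numbers are purely illustrative assumptions, not measurements of any real chip.

```swift
import Foundation

// Dynamic CMOS power scales roughly with capacitance x voltage^2 x frequency.
// Higher clocks usually also demand higher voltage, so power climbs much
// faster than performance does.
func relativePower(freqScale: Double, voltageScale: Double) -> Double {
    voltageScale * voltageScale * freqScale
}

// Illustrative: ~30% more clock needing ~15% more voltage costs roughly
// 72% more power for (at best) 30% more performance.
let powerIncrease = relativePower(freqScale: 1.30, voltageScale: 1.15)
print(String(format: "%.2fx power", powerIncrease))   // ~1.72x
```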
 
I wonder if Thunderbolt 3/4 will still support eGPU devices, assuming Thunderbolt support continues with the Apple Silicon machines?

Reading this thread, it almost appears Apple at least wants to remove the concept of video RAM; perhaps an eGPU will still work, but macOS, rather than the app's programmer, will decide how to allocate video RAM if one is attached.

Hopefully the situation will be clarified soon. I would hate to invest in a Thunderbolt eGPU just to find it obsolete in a couple of years.
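For reference, Metal already exposes enough information for an app (or a curious user) to tell an eGPU apart from the built-in GPU today; whether Apple Silicon Macs keep supporting this is exactly the open question. A minimal sketch (Swift, macOS) using only existing MTLDevice properties:

```swift
import Metal

// List every GPU macOS can see and flag the ones that look like eGPUs.
// isRemovable is true for GPUs attached over Thunderbolt (eGPU enclosures);
// hasUnifiedMemory is true for an integrated GPU sharing system RAM.
for device in MTLCopyAllDevices() {
    print(device.name,
          "| removable:", device.isRemovable,
          "| low power:", device.isLowPower,
          "| unified memory:", device.hasUnifiedMemory)
}
```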
 
Ok, that makes sense for the low-power 9W MBA and the 15W base-model MBP, but what about the higher-wattage 28W 13" MBP and the 16" MBP, and of course the desktops? Are they simply going to increase the clock speed on whatever chip they use? After a while it stops making sense to keep increasing the clock speed, as you hit a problem of exponentially higher power consumption.

I'm confused whether you're talking about GPUs or CPUs here, but the 28W MBPs (basically the 13") haven't had dGPUs since 2010, and they won't add one now. As I said, Apple's GPUs are more powerful than the Iris Plus graphics the 13" MBP and MBA are carrying now.
On iMacs: the 21" base model has an Intel iGPU, while the next one up has a 2GB AMD 555, which is only 19% faster in OpenCL than the A12Z. Except for the top-end iMacs, again it makes no sense to go for dGPUs.

The thing about power consumption is the following: to keep it simple, general-purpose processors (CPU/GPU) are great at general-purpose tasks (serial tasks on the CPU; rasterizing, and more recently ray tracing, on the GPU), but the moment you try to brute-force every task with high clocks they become really inefficient. On an ARM SoC you can design "little ASICs" (to put it simply) inside the chip, so instead of inefficiently solving a task with high clocks you solve it in dedicated hardware, which is far more efficient (more performance per watt). That's why an iPhone's roughly 5-TOPS NPU, which is just a small part of the silicon (and draws about 2W), is on par with 150-200W Nvidia GPUs when it comes to AI.
So that's what they'll do: they won't aim for 2GHz GPUs like AMD/Nvidia, but will, for instance, add a VPU that handles machine vision and image processing far more efficiently than a whole GPU using generic compute units (i.e. a GTX using CUDA). That's the same reason Nvidia (and soon AMD) add dedicated RT and Tensor cores to handle ray tracing: they are far more efficient and better suited to the task than the traditional graphics cores.
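As a hedged illustration of "let the dedicated block do it": with Core ML an app doesn't program the Neural Engine directly, it simply allows every compute unit and lets the framework dispatch work to the ANE, GPU or CPU as it sees fit. The model file name below is a placeholder, not a real asset.

```swift
import Foundation
import CoreML

// Allow Core ML to use every available compute unit - CPU, GPU and the
// Neural Engine - and pick the most efficient one for each part of the model.
let config = MLModelConfiguration()
config.computeUnits = .all

// "Classifier.mlmodelc" stands in for any compiled Core ML model.
let modelURL = URL(fileURLWithPath: "Classifier.mlmodelc")
do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    // Inference now lands on the dedicated ML block when it is the best fit,
    // instead of brute-forcing the work through high-clocked CPU/GPU cores.
    print("Loaded model:", model.modelDescription)
} catch {
    print("Could not load model:", error)
}
```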

Apple chooses what's best for Apple, and if that means ARM CPUs and AMD/NVIDIA GPUs, then I'm sure that is what they'll choose.

Indeed, and that's why they won't go for discrete third-party GPUs in most of their line: one Apple SoC costs them around 60 bucks and includes both the CPU and GPU, while a single third-party discrete GPU already costs more than that. The only products that will have AMD GPUs (because Apple has definitely cut ties with Nvidia) are the ones where professionals really need them: the Mac Pro and probably the 16" MBP.
 
Apple seemed to state categorically that Apple Silicon Macs will have an "Integrated GPU that shares its memory with the system". It's important to mention that their implementation of this idea is not at all the same as Intel's Integrated Graphics Processors and shouldn't be synonymous with "weaker performance" like Intel's is.

That said, I can't fathom anything that would outright stop Apple from adding an AMD GPU into the mix other than heat dissipation and power consumption. Not sure if it's necessary, but if it is deemed necessary, I can't see why it wouldn't still be something they'd have as an MPX module for Mac Pros or an eGPU.

Not sure you'd see it on an ARM-based 16" MacBook Pro, but then again, Apple's own GPU architecture might render the need for AMD GPUs (or NVIDIA GPUs for that matter) obsolete. With everything being Metal (and major software titles already coming to the table with Metal support), Apple is king.
 
I think we have to reconsider what a GPU actually does, while keeping its prime use case in focus. GPUs have nowadays outgrown their original purpose of rendering and accelerating an operating system's GUI. While this keeps being true for gaming, a GPU's potential now lies in compute. I summarised my thoughts on this topic here. (These threads should merge.)
 
I think we have to reconsider what a GPU actually does, while keeping its prime use case in focus. GPUs have nowadays outgrown their original purpose of rendering and accelerating an operating system's GUI. While this keeps being true for gaming, a GPU's potential now lies in compute. I summarised my thoughts on this topic here. (These threads should merge.)

GPUs have been used for compute for a long time already, and Apple's GPUs in particular are quite good at it. But graphics is important too. In fact, graphics is where Apple GPUs shine. They use a very different basic architecture from mainstream GPUs, which allows them to do graphical tasks more efficiently with less reliance on memory performance.
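To make the "less reliance on memory performance" point concrete: on a tile-based (TBDR) Apple GPU, transient render targets such as a depth buffer can live entirely in on-chip tile memory and never touch RAM at all. A minimal Metal sketch of that idea (Swift; the resolution and formats are arbitrary, and memoryless storage is only available on Apple GPUs):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

// A depth attachment that only needs to exist for the duration of a render
// pass can be declared memoryless on a TBDR GPU: it lives in on-chip tile
// memory and is never backed by system RAM or VRAM.
let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                         width: 1920,
                                                         height: 1080,
                                                         mipmapped: false)
depthDesc.usage = .renderTarget
depthDesc.storageMode = .memoryless   // not supported on immediate-mode dGPUs
let depthTexture = device.makeTexture(descriptor: depthDesc)!

// .dontCare tells the GPU it never has to spill the depth data out of tile
// memory at the end of the pass, saving that bandwidth entirely.
let passDesc = MTLRenderPassDescriptor()
passDesc.depthAttachment.texture = depthTexture
passDesc.depthAttachment.loadAction = .clear
passDesc.depthAttachment.storeAction = .dontCare
```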
 
True. This is exactly why I think Apple GPUs will continue to excel at graphical tasks but will offload certain compute tasks onto dedicated Apple compute cards rather than third-party dGPUs - at least on enterprise-level pro devices.
 
I'm really excited about this aspect. I don't really need much more processing power (more would be great, but it isn't like current chips are slow); what I really need is better graphical performance without the fans spinning up all the time, so I can edit photos or use external monitors. So this solution of having the CPU and GPU integrated with shared memory, at that kind of power and heat, sounds perfect for me. And then, I agree, I think Apple makes a dGPU that can either be used in a Mac Pro or as an eGPU for anything else.

I also think it's going to be really interesting to see how Apple incorporates these specialist chips, like the neural processor or security chips, for offloading specialized tasks. It seems like we're going from one massive chip to a system of different cores and chips with different specialities working together. It's something that Apple is uniquely positioned to tackle, and they already have so much experience doing this with the phone; it will be interesting to see how this scales up and what kinds of advantages it can create with laptop and desktop power. Apple can really draw this entire thing up from scratch and do things the way they want from the ground up, and I think it's going to be a leap forward.
 
I still don't see how Apple's SoC GPU can match the heavy-duty high-end graphics cards offered by Nvidia and AMD.
Those easily draw 250W, and no SoC can handle that much heat efficiently alongside the CPU unless you go exotic on cooling. For mobile machines and the Mac mini kind of thing, I see the SoC being successful, but for the high-end iMac and Mac Pro, I doubt it. I'm even dreading it, because I sense Apple giving up on that sector - or making another futile effort to do it their own SoC way in a professional machine, like they did with the MP 6,1 and its combined cooling.
 
I still don't see how Apple's SoC GPU can match the heavy-duty high-end graphics cards offered by Nvidia and AMD.
Those easily draw 250W, and no SoC can handle that much heat efficiently alongside the CPU unless you go exotic on cooling. For mobile machines and the Mac mini kind of thing, I see the SoC being successful, but for the high-end iMac and Mac Pro, I doubt it. I'm even dreading it, because I sense Apple giving up on that sector - or making another futile effort to do it their own SoC way in a professional machine, like they did with the MP 6,1 and its combined cooling.

I agree. Apple GPUs (based on what we know) would be able to deliver excellent performance in the 50W bracket, possibly competing with 100+ watt cards from other vendors, but the limitations of an SoC will stop them from getting any faster. If they ever want to go for the high-end segment (which I am not sure they will), they would have to detach the GPU into its own chip.
 
I agree. Apple GPUs (based on what we know) would be able to deliver excellent performance in the 50W bracket, possibly competing with 100+ watt cards from other vendors, but the limitations of an SoC will stop them from getting any faster. If they ever want to go for the high-end segment (which I am not sure they will), they would have to detach the GPU into its own chip.
Is there a particular reason why an Apple SoC cannot operate at 250W if Intel/AMD/NVIDIA GPUs can? I think not. An Apple SoC for the Mac will not be built with all-day battery life as a key feature.

As Rastafabi pointed out in another thread, GPU tasks on Apple SoCs are split across dedicated processors for machine learning and video encoding, and maybe soon ray tracing. For the Mac Pro, there is already a dedicated video-processing card in the form of the Afterburner. We may see more of these dedicated cards, for instance for ML, leaving the Apple SoC to handle "just" the true graphics. Everything is about efficiency, and splitting the general-purpose GPU/CPU into dedicated silicon for separate functions seems to be a viable path to explore.

In that context, it is interesting to look at how the Mac Pro is constructed. The introduction of the MPX modules (which do very little for traditional GPU packaging) looks like an excellent package for high-power dedicated Apple Silicon ML modules or other functions.
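On the "dedicated silicon for separate functions" point, software can already ask the OS whether a fixed-function media block will do the work instead of the general-purpose GPU/CPU. A small sketch (Swift, using VideoToolbox's decode-side query as a stand-in for the general idea; Afterburner and any future ML cards would be exposed through their own frameworks):

```swift
import CoreMedia
import VideoToolbox

// Ask VideoToolbox whether a hardware (fixed-function) HEVC decoder is
// available, i.e. whether the dedicated media engine rather than the
// GPU/CPU shader cores will handle the codec work.
if VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC) {
    print("HEVC decode runs on the dedicated media block")
} else {
    print("HEVC decode falls back to software on the CPU/GPU")
}
```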
 