Well, Apple made references to "longer battery life" with their processors, so it might be acceptable to them to market a computer offering remarkable energy efficiency with no real speed improvements

And for machines like the MacBook Air and the rumoured replacement for the 12" MacBook - which are sold for their ultra-portability and long battery life (and are already outperformed by the iPad Pro) - that's a perfectly reasonable design decision (...although I do wonder if Apple deliberately set a low bar for the ARM MacBook by leaving that heatpipe off the last version of the Air...)

Even with the iMac, there are plenty of people working in video and audio production who'd be delighted to have 2020-level performance if it came without 2020-level fan noise.

Let's face it - if you want the best GPU performance available you buy a PC and pocket the copious change. That's been true since some time in the 1990s (...even when you could plug PCIe cards into your Mac Pro Classic, the drivers weren't always up to snuff).
 
Let's face it - if you want the best GPU performance available you buy a PC and pocket the copious change. That's been true since some time in the 1990s (...even when you could plug PCIe cards into your Mac Pro Classic, the drivers weren't always up to snuff).
I'm not sure that this has to be the case moving forward. If they can put the GPU performance of an Xbox, which has a fan and quite a large enclosure, into an iPad, which does not have a fan, then maybe, just maybe, there are ways to improve GPU performance with other tricks than simply going bigger and more power-hungry.
 
All of you need to realize that the A12Z is already easily outperforming the most powerful integrated GPU in any intel chip.

These rumors are talking about a much more powerful GPU, one that will likely outperform the 5700 XT while being integrated into the SoC.

It may not beat it in raw performance, but thanks to unified memory, extra tile memory, hardware accelerators and very high efficiency and utilization, it'll give you better real-world performance.

No driver issues either.
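To illustrate the unified-memory point, here's a minimal Metal sketch (my own illustration, assuming nothing beyond the standard Metal API): with .storageModeShared the CPU and GPU literally work on the same allocation, where a discrete GPU would typically need a staging copy over PCIe first.

import Metal

// Minimal sketch: on a unified-memory SoC a .storageModeShared buffer is a single
// allocation that both the CPU and the GPU read and write directly. On a discrete
// GPU the same data would usually live in a .storageModePrivate buffer and need a
// blit over PCIe before the GPU could use it.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

let vertices: [Float] = [0.0, 0.5, -0.5, -0.5, 0.5, -0.5]   // placeholder triangle data
let shared = device.makeBuffer(bytes: vertices,
                               length: vertices.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU can tweak the contents in place; no copy is needed before the GPU's next pass.
shared.contents().assumingMemoryBound(to: Float.self)[0] = 0.25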

I doubt the 12Z offers much better GPU performance than the top Ice Lake Gen11 GPU, and the GPU performance from intel will increase significantly in the upcoming years.
 
I doubt the 12Z offers much better GPU performance than the top Ice Lake Gen11 GPU, and the GPU performance from intel will increase significantly in the upcoming years.
The chip that got announced today? What makes you think that?
 
The A12Z outperforms any Iris GPU by a large margin in terms of graphics (not necessarily in compute). As for Intel Xe GPUs, we don't know.
 
The A12Z outperforms any Iris GPU by a large margin in terms of graphics (not necessarily in compute). As for Intel Xe GPUs, we don't know.

Don't know, or haven't looked?

[Image: Tiger Lake (Xe-LP) GPU performance chart]



More confirmation benchmarks should come in, but not only does the Gen 12 (Xe-LP) GPU have twice the number of EUs (cores) as the former "Iris" GPU implementations, the raster/memory implementation is substantively different as well. It supports concurrent 4K displays (with special 'hooks' into the display subsystem tapped into the memory controller subsystem).


One reason why these initial implementations are capped at 4 cores is the number of transistors Intel has now allocated to the GPU (and associated fixed-function units).


Gen 12 (Xe-LP)

Gen 11 (Ice Lake) [Ice Storm median around 74K]
https://www.notebookcheck.net/Intel-Iris-Plus-Graphics-G7-Ice-Lake-64-EU-Laptop-GPU.422866.0.html

Iris Pro 580

Compare Gen 12 versus the 580.
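For a rough sense of scale (my own back-of-the-envelope math, using Intel's published EU counts and typical boost clocks): each EU has 8 FP32 ALUs and an FMA counts as 2 FLOPs, so Gen11's 64 EUs at ~1.1 GHz work out to about 64 × 8 × 2 × 1.1 GHz ≈ 1.1 TFLOPS, while Xe-LP's 96 EUs at ~1.35 GHz land around 96 × 8 × 2 × 1.35 GHz ≈ 2.1 TFLOPS. Roughly double on paper, before any of the raster/memory changes.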
 
What exactly should make me think that the 12Z offers much better graphics performance?
I’m pretty sure the 12Z smokes anything integrated from Intel. If the new iGPU is roughly double the performance that would put it on par with an Apple SoC that is a couple years old now. So while the new Intel stuff is a VERY welcome step forward, I suspect it’s going to be embarrassed by Apple’s upcoming new Macs.
 
Well, the A12Z offers much better performance than the best intel Iris. It's like 2X faster (on GFXBench).
GFXBench, right.
I’m pretty sure the 12Z smokes anything integrated from Intel. If the new iGPU is roughly double the performance that would put it on par with an Apple SoC that is a couple years old now. So while the new Intel stuff is a VERY welcome step forward, I suspect it’s going to be embarrassed by Apple’s upcoming new Macs.
I find it amusing you are so sure about something you can't even prove.
 
As others have mentioned, the current A12Z smokes the best iGPU from intel. It smokes the best AMD iGPUs as well.
The iPad Pro scores >50 fps in GFXBench offscreen while the RX Vega 11 does not even reach 30 fps.

The RX Vega 11 isn't the latest APU from AMD either. The A12X/A12Z die beat AMD and Intel to 7-10nm densities several years ago, but the Vega 11 and Gen11 aren't the "best" AMD and Intel have to offer now, respectively.

You can try to counter that Apple's A14-era GPUs are going to do better with the jump to 5nm, but some of that is going to depend upon Apple's transistor allocation. For the more mid-range Macs they will probably 'go big' on the transistor budget (and larger still on the cache allocation), but the iPad Pro-sized SoC is probably going to be more limited in scope (the gap may not be in the "smokes" range).


"offscreen" is also a bit wishy-washy because Apple is likely tap dancing around limited memory subsystem bandwidth there. It is suppose to "even the playing field" to make it more "Apples to Apples" comparison between systems with varying screen sizes. But that is also being gamed by some GPU designers and driver implementers also.
 
What exactly should make me think that the 12Z offers much better graphics performance?

You shouldn't care - and it won't be an A12Z in production machines anyhow, it will be an A14X

In any case, in the real world, my A10x iPad runs games far smoother than my 8th Gen Intel w/UHD620

I don't know if the 14x will smoke the new intel processors, but I'd guess they will perform far better in real-world, "Designed for Arm" applications, e.g. FCP, Logic, and probably Adobe apps.

None of us know for sure what the performance gain of the next gen Apple processors will be, but I'm prepared to be impressed.
 
GFXBench, right.
So? It's the same tool used to compare both GPUs. Doesn't matter if it's not high end, the intel GPUs still perform poorly on this test.

You want to see how the current intel Iris GPUs perform in Shadow of the Tomb Raider? Spoiler: not good at all. They struggle to achieve 27 fps at 720p lowest (the game looks like absolute crap at these settings) while the A12Z achieves at least 30 fps at higher settings, 1080p, under emulation.
 
The RX Vega 11 isn't the latest APU from AMD either. The A12X/A12Z die beat AMD and Intel to 7-10nm densities several years ago, but the Vega 11 and Gen11 aren't the "best" AMD and Intel have to offer now, respectively.
What is the current best AMD APU? The RX Vega 11 is rated at 1.7 TFLOPS fp32. I haven't found anything faster from AMD.
 
In any case, in the real world, my A10x iPad runs games far smoother than my 8th Gen Intel w/UHD620

I doubt Intel's 8th gen iGPUs have performance problems with mobile games.

I don't know if the 14x will smoke the new intel processors, but I'd guess they will perform far better in real-world, "Designed for Arm" applications, e.g. FCP, Logic, and probably Adobe apps.

And how would you know that if those apps won't run on Intel's new chips while enjoying the same degree of optimization?

What I'm saying is that in order to draw the definitive conclusion that X performs much better than, or would "smoke", Y, they have to be compared in the same environment, in the same applications, with the same level of optimization.
If Intel's chips have a clear advantage in most native applications, I'm pretty certain most users here would start to make excuses, and that is no different from Apple's chips performing better in exclusive "Designed for Arm" applications.
None of us know for sure what the performance gain of the next gen Apple processors will be, but I'm prepared to be impressed.

Or disappointed.
 
"offscreen" is also a bit wishy-washy because Apple is likely tap dancing around limited memory subsystem bandwidth there. It is suppose to "even the playing field" to make it more "Apples to Apples" comparison between systems with varying screen sizes. But that is also being gamed by some GPU designers and driver implementers also.
The final display of the image really should not have a big impact on performance at this resolution (1080p). And why do you think it should favour Apple in particular?
The iPad Pro is capable of 120 fps at 2388×1668, so it should have plenty of bandwidth to draw the frame buffer on screen.
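As a rough sanity check on the bandwidth point (my own arithmetic, assuming a 32-bit framebuffer): 2388 × 1668 × 4 bytes ≈ 16 MB per frame, so even at 120 fps the final scan-out is only about 1.9 GB/s, a small fraction of the tens of GB/s of LPDDR4X bandwidth the A12Z has to work with.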
 
What is the current best AMD APU? The RX Vega 11 is rated at 1.7 TFLOPS fp32. I haven't found anything faster from AMD.
The desktop Ryzen 7 4700G.


We are talking about playable frame rates in many AAA PC games. All while using the old Vega architecture.
 
The Vega 8 is rated at 1.126 TFLOPS (when not overclocked, that is).
EDIT: the TechPowerUp database indicates a frequency of 1.1 GHz. I suppose there are faster variants of this GPU.
At 2.1 GHz, it should be a nice iGPU yes.
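For reference, the usual back-of-the-envelope formula is FP32 TFLOPS ≈ shader count × 2 (FMA) × clock. Vega 8 has 512 shaders, so 512 × 2 × 1.1 GHz ≈ 1.13 TFLOPS at the stock clock, and the same math at the 4700G's 2.1 GHz gives ≈ 2.15 TFLOPS; Vega 11's 704 shaders at ~1.25 GHz is where the ~1.7 TFLOPS figure comes from.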
 
....
You want to see how the current intel Iris GPUs perform in Shadow of the Tomb Raider? Spoiler: not good at all. They struggle to achieve 27 fps at 720p lowest (the game looks like absolute crap at these settings) while the A12Z achieves at least 30 fps at higher settings, 1080p, under emulation.

In the WWDC demo they made a point to show the A12Z for the Word, Adobe Photoshop, and Lightroom demos. The Maya and games demos started off with "using Apple Silicon". That isn't necessarily the same thing.
 
So? It's the same tool used to compare both GPUs. Doesn't matter if it's not high end, the intel GPUs still perform poorly on this test.

It's not a tool that shows a definitive picture of the performance difference between the two, especially when they both run in different environments.

You want to see how the current intel Iris GPUs perform in Shadow of the Tomb Raider? Spoiler: not good at all. They struggle to achieve 27 fps at 720p lowest (the game looks like absolute crap at these settings)

I've also seen the Intel Iris Plus Graphics G7 run Shadow of the Tomb Raider at close to 30 fps at 1080p with custom settings.

while the A12Z achieves at least 30 fps at higher settings, 1080p, under emulation.

Well you obviously don't understand game benchmarking at all.
The performance in Shadow of the Tomb Raider varies greatly based on what's shown on the screen. If Apple had really been confident about the performance of the 12Z in this game, they would have run the game's benchmark live with the FPS counter visible. They instead chose to show a small scene with Lara running through the jungle and basically nothing happening, so a best-case scenario in terms of GPU performance in this game. Not something that objectively translates into: the A12Z offers much better performance than the best intel Iris.
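To make the point concrete, here's a rough Swift sketch (my own toy example, not anything from the game or Apple) of why an average over a calm jungle stroll can hide the heavy moments; you'd want the built-in benchmark, or at least averages plus 1% lows over a varied run:

// Given per-frame times in milliseconds, compute average FPS and the "1% low"
// (the average of the slowest 1% of frames). A short calm scene can post a great
// average while a single combat spike drags the 1% low way down.
func fpsStats(frameTimesMs: [Double]) -> (averageFPS: Double, onePercentLowFPS: Double) {
    let avgMs = frameTimesMs.reduce(0, +) / Double(frameTimesMs.count)
    let slowest = frameTimesMs.sorted(by: >).prefix(max(1, frameTimesMs.count / 100))
    let slowAvgMs = slowest.reduce(0, +) / Double(slowest.count)
    return (1000 / avgMs, 1000 / slowAvgMs)
}

let calmRun = Array(repeating: 16.7, count: 300)             // ~60 fps throughout
let mixedRun = calmRun + Array(repeating: 40.0, count: 10)   // plus a few ~25 fps frames
print(fpsStats(frameTimesMs: calmRun))    // ≈ (60, 60)
print(fpsStats(frameTimesMs: mixedRun))   // average barely moves, 1% low drops to ≈ 25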
 
In the WWDC demo they made a point to show the A12Z for the Word, Adobe Photoshop, and Lightroom demos. The Maya and games demos started off with "using Apple Silicon". That isn't necessarily the same thing.
They never said "A12Z", they just showed the "About this Mac" window to prove that their earlier macOS demos were using Apple Silicon. The gaming demo could have used a different model, but we have no evidence that they used different hardware for different demos.
Apple reportedly confirmed that every demo was using the A12Z, but I haven't found any confirmation myself.
 
I doubt Intel's 8th gen iGPUs have performance problems with mobile games.



And how would you know that if those apps won't run on Intel's new chips while enjoying the same degree of optimization?

What I'm saying is that in order to draw the definitive conclusion that X performs much better than, or would "smoke", Y, they have to be compared in the same environment, in the same applications, with the same level of optimization.
If Intel's chips have a clear advantage in most native applications, I'm pretty certain most users here would start to make excuses, and that is no different from Apple's chips performing better in exclusive "Designed for Arm" applications.

Or disappointed.

Well as a comparison - Fortnite on iPad vs Intel w/UHD620

I'll let you YouTube it, but bottom line: it can run like a potato on Intel's integrated graphics and smooth as butter on an iPad.
 
Well you obviously don't understand game benchmarking at all.
The performance in Shadow of the Tomb Raider varies greatly based on what's shown on the screen.
I own this game, and I benchmark games all the time; I know what I'm talking about, thank you. I just played the game today, and when I reach the section played at WWDC, I'll see how it performs.
From my experience, performance in this game does not vary that much, and it doesn't appear to track the complexity of what's on screen. The lowest performance I've seen so far was in ruins at night, when nothing in particular was showing on screen except Lara. During the tsunami sequence, performance was much better.
 