The Vega 8 is rated at 1.126 TFLOPs (when not overclocked, that is).
EDIT: the techpowerup database indicates a frequency of 1.1 GHz. I suppose there are faster variants of this GPU.
At 2.1 GHz, it should be a nice iGPU yes.
You can look at TFLOPs as much as you like; what matters is real-world performance.
AMD's Renoir APUs are really impressive when it comes to real-world graphics performance.
And the GPU is clocked at 2100 MHz by default.
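For what it's worth, that 1.126 TFLOPs figure falls straight out of the usual back-of-the-envelope formula (2 FLOPs per shader per clock, counting a fused multiply-add as two operations). A quick sketch in Swift, assuming the commonly cited 512 shaders (8 CUs) for the Vega 8 and the clocks mentioned above:

```swift
// Theoretical FP32 throughput: 2 FLOPs (one fused multiply-add) per shader per cycle.
// The shader count and clocks are assumptions taken from the figures quoted in this thread.
func teraflops(shaders: Double, clockGHz: Double) -> Double {
    return 2.0 * shaders * clockGHz / 1000.0   // result in TFLOPs
}

print(teraflops(shaders: 512, clockGHz: 1.1))  // ≈ 1.126 TFLOPs (the techpowerup rating)
print(teraflops(shaders: 512, clockGHz: 2.1))  // ≈ 2.15 TFLOPs at the 2100 MHz Renoir clock
```

Which is exactly the point: the theoretical number scales linearly with clock and shader count and says nothing about how well a game actually uses the hardware.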
 
Well, as a comparison: Fortnite on iPad vs Intel w/UHD 620

I'll let you YouTube it, but bottom line: it runs like a potato on Intel's integrated graphics and smooth as butter on an iPad.
The Intel system runs the PC version of Fortnite, which is different.
 
I own this game, and I benchmark games all the time; I know what I'm talking about, thank you. I just played the game today, and when I reach the section played at WWDC, I'll see how it performs.
From my experience, performance in this game does not vary that much, and it doesn't appear to track the complexity of what's on screen. The lowest performance I've seen so far was in the ruins at night, when nothing in particular was showing on screen except Lara. During the tsunami sequence, performance was much better.
Now you are just making stuff up. In any game, when there are a lot of things happening on the screen, the FPS takes a hit, and Shadow of the Tomb Raider is no different. Any benchmark run of this game on YouTube will show this.
Anyway, it's just one game. If you are such an "expert" who constantly benchmarks games, surely you know that you can't objectively draw the conclusion, based on a single game, that the A12Z offers much better performance than the best Intel Iris.
 
I have every expectation that Feral Interactive will port games to Apple Silicon, and I’d be surprised if they’re not already playing around with the Developer Transition Kit.

Why wouldn’t they? Remember, they got their start porting games to Mac OS on PowerPC, and since then they’ve continued to port a steady flow of games to the Mac on Intel 32 and 64, to Linux, to Android, and even to the Nintendo Switch.

Nor is it much of a stretch to imagine that Aspyr will bestir themselves. The market will certainly be there for game ports to macOS on Apple Silicon, since game developers and publishers will no longer be able to rely on Boot Camp to reach Mac users.

In fact, I wouldn’t be at all surprised if this transition leads to a second golden age of Mac game porting, comparable to that of the early 2000s. The transition to Apple Silicon has the potential to be a windfall for Mac game porting specialists like Feral and Aspyr.

Of course, this transition may well take a few years to get up to steam, so in the near term I suspect you’d be right not to wait.

I'm a relatively new Mac user. I had an Apple IIe back in the late '70s or early '80s (I don't remember the exact years), but I didn't start looking at Macs until I heard that they were planning on going to Intel chips. I never went through the PowerPC to Intel transition, and my first iMac was in 2008. I thought that going Intel would bring about some game parity with Windows machines, but it didn't happen, for a lot of reasons. I said in another post that I don't really game much anymore, so I don't have that expectation now anyway. I will wait to buy anything until at least the first Apple Silicon computer is released. I doubt games will be any higher on my personal priority list, but I would feel better if the reason Apple doesn't use industry-standard graphics (or developers don't use Apple's graphics) were at least explained, especially if that reason isn't going away.
 
This is simply false. Apple-designed GPUs (which start with the A11) have been leaders in perf/Watt, as the results show. The A12X/Z sit alone, miles ahead of any other tablet GPU on GFXBench. It is certainly not "well-known that Apple GPUs are modified PowerVR IP". They use a new Apple design. Yes, they are TBDR, like PowerVR GPUs, but that's about it.
Apple having a recent licensing deal with Imagination certainly doesn't mean that Apple GPUs have been disappointing. We don't know the details, but some suspect it has to do with ray-tracing hardware, which Apple (and AMD/Intel) GPUs currently lack, and which Imagination has been developing.

And do you think Apple are idiots, that they will ditch AMD knowing that their own GPUs cannot compete?
Simply false?!? Fanboyism is all well and good, but stretching it to believe a particular point of view is also delusion. You quote Anandtech for the benchmark figures, but funnily enough leave out the part where not only Anandtech but other key semiconductor sites like TechInsights have confirmed that the current Apple GPU design is not a clean-sheet design at all but, like I said, a modified PowerVR design.

You just displayed your rabid fanboyism that way, LOL. Not to discount Apple's efforts here, far from it, but there's a reason that in the current climate there is no such thing as a clean-sheet GPU design by a third party: the existing minefield of GPU patents held by established players.

You quote Anandtech, but then again, if you actually read carefully, you'll find that Apple to this day still exposes Imagination's patented TBDR format in its drivers, which pretty much says enough. But of course a fanboy's opinion carries greater weight than that of countless established tech and silicon insiders. LOL
 
You can look at TFLOPs as much as you like; what matters is real-world performance.
Context is important. We were comparing the Vega 8 to the Vega 11. My point was that the Vega 8 was not faster than the Vega 11, since the latter is rated at 30% more TFLOPs on techpowerup. On the same architecture, you don't expect better performance from 30% less compute power. But the techpowerup database does not list the more recent Vega 8 variants, which appear to be faster.

Now you are just making stuff up. In any game, when there are a lot of things happening on the screen, the FPS takes a hit, and Shadow of the Tomb Raider is no different. Any benchmark run of this game on YouTube will show this.
Anyway, it's just one game. If you are such an "expert" who constantly benchmarks games, surely you know that you can't objectively draw the conclusion, based on a single game, that the A12Z offers much better performance than the best Intel Iris.

I have provided two bases of comparison: GFXBench, where the A12Z is about 2X faster than any Intel Iris in the various tests, and SoTR. The latter is not a "direct" comparison, yes, but the numbers we have so far suggest that the A12Z also beats any current Intel Iris by a wide margin (27 fps at 720p lowest on the benchmark tool for the Iris vs 30+ fps at 1080p at probably "low" settings on a different scene for the A12Z). These two results are consistent with each other.
What's your own basis of comparison between the A12Z and the Intel Iris? None.

And have you played SoTR before accusing me of making stuff up? I was hitting the low 30s in fps when Lara was alone in the ruins, while the benchmark scores 45+ fps average at my settings. I'm not saying that I couldn't hit much higher performance elsewhere, and I'll let you know when I reach the section demoed at WWDC. From what I've seen so far in the jungle, I doubt this part will run much smoother than the built-in benchmark sections.
 
Simply false?!? Fanboyism is all well and good, but stretching it to believe a particular point of view is also delusion. You quote Anandtech for the benchmark figures, but funnily enough leave out the part where not only Anandtech but other key semiconductor sites like TechInsights have confirmed that the current Apple GPU design is not a clean-sheet design at all but, like I said, a modified PowerVR design.
Show some concrete evidence instead of calling someone a fanboy, will you? Show me the quotes or the links to those "key semiconductor sites". When I see the evidence, I will gladly admit my error. So far, I haven't seen it. What I've seen is realworldtech saying this, even before the A11 was developed:
The overall result is that while Apple’s GPU shares some heritage with PowerVR, it is a unique and proprietary design. It is a world-class design with impressive performance and power efficiency; the A9 processor has the best score on nearly every mobile graphics benchmark, and the A10 Fusion is 40-50% faster still.
It's my understanding that Apple dropped the PowerVR "heritage" (fixed function hardware) with the A11.
And they go on saying this about the two architectures:
Comparing the available details for the two make it clear that they are very distinct.

And then there's anandtech quoting an Imagination Tech press release (which appears to be removed from Imagination's website):
Furthermore the GPU design that replaces Imagination’s designs will be, according to Imagination, “a separate, independent graphics design.” In other words, Apple is developing their own GPU, and when that is ready, they will be dropping Imagination’s GPU designs entirely.
So you think Apple lied to Imagination and continued on the PowerVR design?
Also, Imagination asserted at that time that it would be very hard for Apple to come up with a design that does not use Imagination's IP. Basically, they warned Apple about a future lawsuit. Don't you think they would have launched their lawsuit if Apple had actually used their design?

Anyway, "modified" could mean anything. Yes, Apple GPUs are TBDR, and IM Tech invented the concept of TBDR. So I suppose you could say that the A12/13 are "just" modified IM Tech GPUs.

But what does this discussion have to do with anything? Apple could still be using PowerVR designs; it wouldn't change the fact that Apple GPUs are best in class (among mobile GPUs). No tablet GPU comes close to the A12Z. And somehow you think that "Apple signed a new licensing agreement which pretty much says that Apple's custom GPU efforts were largely for naught."
What an outlandish claim to make. Would you reconsider your argument in light of the performance numbers?
 
Context is important. We were comparing the Vega 8 to the Vega 11. My point was that the Vega 8 was not faster than the Vega 11, since the latter is rated at 30% more TFLOPs on techpowerup. On the same architecture, you don't expect better performance from 30% less compute power. But the techpowerup database does not list the more recent Vega 8 variants, which appear to be faster.

My point was that talking about TFLOPs is not important. Apple's GPUs won't match the level of optimization AMD and Nvidia GPUs get in AAA games anytime soon, so TFLOPs is an irrelevant metric in such a context.

I have provided two bases of comparison: GFXBench, where the A12Z is about 2X faster than any Intel Iris in the various tests, and SoTR. The latter is not a "direct" comparison, yes, but the numbers we have so far suggest that the A12Z also beats any current Intel Iris by a wide margin (27 fps at 720p lowest on the benchmark tool for the Iris vs 30+ fps at 1080p at probably "low" settings on a different scene for the A12Z). These two results are consistent with each other.
What's your own basis of comparison between the A12Z and the Intel Iris? None.

I don't have to provide anything when my point (which you didn't dispute anyway) was that a mobile benchmark and one small fragment of a single game are not enough to draw the definitive conclusion that the A12Z is 2X as fast as, or much faster than, Intel's fastest current iGPU.

And have you played SoTR before accusing me of making stuff up? I was hitting the low 30s in fps when Lara was alone in the ruins, while the benchmark scores 45+ fps average at my settings. I'm not saying that I couldn't hit much higher performance elsewhere, and I'll let you know when I reach the section demoed at WWDC. From what I've seen so far in the jungle, I doubt this part will run much smoother than the built-in benchmark sections.

Well, I did say that performance varies in this game based on the section being played or what's shown on the screen, which in the end you also confirmed.
 
My point was that talking about TFLOPs is not important. Apple's GPUs won't match the level of optimization AMD and Nvidia GPUs get in AAA games anytime soon, so TFLOPs is an irrelevant metric in such a context.
Apple's TBDR architecture is particularly efficient at rasterising tasks, not so much at compute tasks (TFLOPs). This is kind of the opposite of the Vega architecture, which boasts a lot of FLOPs for disappointing results in games. Nvidia GPUs score much higher for the same number of TFLOPs because they make much better use of their caches. Their GPUs use a tile-based rasterising approach which is reminiscent of TBDR, but it is not full-fledged TBDR, and developers cannot take advantage of it in their code.
Apple has been tailoring Metal for its own GPUs. It is the only company that masters the whole graphics stack, from the hardware to the API, including the whole OS. This stack should be as efficient and optimised as possible. Now it will be up to developers to take advantage of it.
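To make that last point concrete: Metal exposes the TBDR nature of Apple GPUs directly. For example, a render-pass attachment that only needs to exist for the duration of the pass (like a depth buffer) can be declared memoryless, so it lives entirely in on-chip tile memory and never touches DRAM. A minimal sketch (the resolution is just a placeholder):

```swift
import Metal

// On Apple's TBDR GPUs, a transient depth attachment can be kept entirely in tile memory.
// .memoryless storage is only valid on TBDR hardware; on Intel/AMD GPUs this would fail validation.
let device = MTLCreateSystemDefaultDevice()!

let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                         width: 1920,    // placeholder size
                                                         height: 1080,
                                                         mipmapped: false)
depthDesc.usage = .renderTarget
depthDesc.storageMode = .memoryless            // never backed by DRAM
let depthTexture = device.makeTexture(descriptor: depthDesc)!

let passDesc = MTLRenderPassDescriptor()
passDesc.depthAttachment.texture = depthTexture
passDesc.depthAttachment.loadAction = .clear
passDesc.depthAttachment.storeAction = .dontCare   // contents are discarded once the pass ends
```

That's the kind of bandwidth and power saving a developer can opt into on Apple GPUs but not on Nvidia's tile-based rasteriser, which is exactly the distinction above.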
Well, I did say that performance varies in this game based on the section being played or what's shown on the screen, which in the end you also confirmed.
I'm not denying that. I also agree that one game might not be representative. But the evidence we have, even if it's meager, does not contradict the claim that Apple GPUs will be competitive (at a given level of power consumption).
 
I wonder how old the source of that info is, but according to this article (AMD drivers for macOS Big Sur namedrop third-gen RDNA hardware), Big Sur still contains code for AMD's RDNA 3 chips. That's still two generations ahead!

I'm not disputing that consumer-focused Macs would probably be first in line to get Apple-powered graphics (probably on the SoC), but I don't think Apple would drop support for AMD that soon, if at all. That would leave Pro users in the dark.
Their graphics would at least have to match the best available from AMD at that moment in order not to alienate Pro customers.

I agree 100% with this. I have no doubt they can do better at the low end. But in the same way that AMD has failed to crack mobile chips, I'm not sure Apple can just strut into the multi-core pro marketplace and replace Ryzen-level power; yet not only are they suggesting they'll do that, but also that they'll blow away the AMD graphics cards AMD itself has been struggling to improve for years.

I suppose they don't need to be as good as a 3080 or 3090, for instance, but with Big Navi coming up, and more advances over the two years it's going to take to build this iMac properly for pro users, they'll need to be getting some serious graphics performance.
 
Then get ready for disappointment:

("No direct booting == no Boot Camp")

...although we don't know yet whether that means that Apple will be actively blocking that or simply not supporting it - but even then there's no guarantee that any existing OSs will be able to run "bare metal" on an AS Mac - there's more to computer "architecture" than just the processor instruction set and, at the very least, an OS will need Windows and/or Linux drivers for Apple Silicon graphics, storage, sound, networking... Bear in mind that Intel Macs basically are PCs whereas we don't know what AS Macs are going to have in common with other ARM systems.

We do know that there will be virtualisation support for ARM Linux since Apple showed Parallels running Debian Linux at WWDC. A hypervisor like Parallels can act as a bridge between guest OSs written for other ARM hardware and the "real" hardware supported by the MacOS drivers. So whether you can run Windows 10 for ARM in a VM on AS Macs will largely depend on whether Microsoft deigns to license it (last I looked, it wasn't offered as a consumer product, only bundled with a Surface Pro X). Since AS Macs will outnumber ARM-based Windows PCs approximately 5 minutes after they go on sale, I suspect Microsoft will want in unless they get into a... water-passing contest with Apple over something. Win10 for ARM has an x86 emulation (currently 32-bit only, 64 bit supposedly in the works) so that will probably be your route for running x86 Windows software (...and should be far more efficient than trying to run the whole of x86 Windows under software-emulation).
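For those curious what that plumbing can look like, Big Sur also ships Apple's own Virtualization.framework, which is the sort of thing a hypervisor can build on. A minimal sketch of configuring an ARM Linux guest with it; the kernel and disk image paths are placeholders, and a real setup needs more devices (console, network), the virtualization entitlement, and proper error handling:

```swift
import Foundation
import Virtualization

// Minimal Linux guest using Apple's Virtualization.framework (macOS 11+).
// Paths are placeholders; force-try is only to keep the sketch short.
let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinuz"))
bootLoader.commandLine = "console=hvc0 root=/dev/vda"

let config = VZVirtualMachineConfiguration()
config.bootLoader = bootLoader
config.cpuCount = 2
config.memorySize = 2 * 1024 * 1024 * 1024      // 2 GiB

let disk = try! VZDiskImageStorageDeviceAttachment(
    url: URL(fileURLWithPath: "/path/to/disk.img"), readOnly: false)
config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: disk)]

try! config.validate()

let vm = VZVirtualMachine(configuration: config)  // uses the main queue by default
vm.start { result in
    if case .failure(let error) = result {
        print("VM failed to start: \(error)")
    }
}
// A real app keeps a run loop alive here so the guest keeps running.
```

Whether Windows 10 for ARM ever becomes a supported guest is, as said above, entirely down to Microsoft's licensing, not the technology.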

As for x86 Windows... it will be a choice between full software emulation (Ahh, back to the good old days of watching SoftWindows slowly grind away - although emulation has come a long way since then) or virtual desktop talking to a "real" x86 in the cloud.

Still, you've got another 3 years or so - before you have to jump to AS - to either kick the Windows habit, buy a PC or spin up a PC in the cloud.

Some professional software - especially in the medical field - is ONLY available for Windows, and up until now, the iMac was simply the best Windows machine. If Apple does not offer an "iMac classic" with Intel inside, people including me will have to move to a Microsoft Surface (a pricey option) or other all-in-ones like Dell's. Sad. Bad move, Apple!
 
Sad, but that type of usage is really niche.
It would cost more for Apple to maintain an Intel model than it would benefit them.
 
Some professional software - especially in the medical field - is ONLY available for Windows,

Well, yes, because the demand for such specialist software on Macs is negligible. Maybe they'll at least produce a Windows-on-ARM version or maybe it will work with the x86 translator on Windows for ARM. ...but also, I'd give it an even chance that in ~3 years' time you'll be required to access such software online (either as a web service or via remote desktop to a virtual windows box in the cloud) because "security" (which is how the adminisphere spells "record keeping, accountability and liability").

the iMac was simply the best Windows machine

Well, maybe it was the best Windows all-in-one - but once you drop the "all-in-one" requirement (one option is to bolt a SFF to the back of a display - that seems to be what Dell are offering at the moment) the Windows world offers vastly better choice and value-for-money, not to mention the ability to upgrade your PC without throwing away your display (and vice-versa). I've never understood what makes people buy Mac hardware if their primary requirement is Windows - the ability to run Windows has certainly been useful in the past (decreasingly so, in my experience) but buying a Mac has long meant paying a premium, and putting up with restricted choice, for the privilege of using MacOS.

If there were demand for Windows all-in-ones there would be more all-in-ones. In the Mac world, Apple has created an artificial demand for all-in-ones by refusing to make a regular desktop (or even an Apple-branded display to match the Mini). Personally, I have an iMac - not because I want an all-in-one but because I want MacOS on a reasonably powered desktop, and that was all Apple had to offer. The Surface Studio would make the iMac look like a bucket of spare parts if MS hadn't lumbered it with pathetic specs (but then MS are making computers with one hand tied behind their back - they can't afford to compete with some of their biggest customers: other PC makers).

people including me will have to move to a Microsoft Surface (a pricey option) or other all-in-ones like Dell's.

...and if Apple don't produce a decent desktop option (be it AS or Intel), people including me will have to buy PCs anyway. Now, I don't think the switch to AS is going to make Apple produce the fabled xMac but an AS Mac Mini that wasn't knobbled by Intel's lowest-common-denominator iGPU would certainly be a contender. Meanwhile, AS certainly has the potential to make better all-in-ones with better graphics and silent running.

It is far more important for Apple to make the best Mac than the best Windows machine (the world already has more PC clone makers than it needs) - it's just that we've had a decade or so when those two objectives lined up. Now, the x86* is a shackle that Windows can't cast off because of its obsession with legacy support, but Apple can. It is now up to Apple to deliver AS Macs that are better than Intel Macs - we'll have to wait and see (but the performance of the A12/A13 chips suggests that it's achievable).

* it's fundamental: any x86 implementation has to carry around a ton of extra circuitry to translate x86 CISC code into RISC-like micro-ops, on top of the RISC core that runs those. Other ISAs will always be able to cram more cores into the same size/power envelope - the only way x86 can beat ARM/RISC-V/whatever is for Intel to maintain a huge lead in fabrication technology, which it has now lost. ARM has been the most widely used CPU for years now - giving it the biggest development budget - and x86 is stuck in a shrinking pond of 'traditional' PCs, and even there software is becoming increasingly processor-independent. Wouldn't surprise me if Intel didn't get into the ARM (or some other new ISA) game soon (actually, they've made ARM chips in the past - the StrongARM that they inherited from DEC - might still be making them for all I know - I wonder if they've still got the license?)
 
Some professional software - especially in the medical field - is ONLY available for Windows, and up until now, the iMac was simply the best Windows machine. If Apple does not offer an "iMac classic" with Intel inside, people including me will have to move to a Microsoft Surface (a pricey option) or other all-in-ones like Dell's. Sad. Bad move, Apple!
You are kidding, right? As much as I love the Mac, I will admit that as Windows machines go it is relatively piss poor until you hit the Mac Pro line. Thanks to the iMac's design, the Intel CPU is more subject to thermal throttling than a traditional "case" PC with the exact same spec, which logically makes it an inferior Windows machine.

The ARM chip is going to change that, and I suspect Microsoft will try to cut a deal with Apple to make Windows on ARM more viable (last I read, Microsoft's emulator could handle 32-bit x86 code but not 64-bit, which is one of the reasons efforts to sell ARM PCs have been a dud).
 
I was hitting the low 30s in fps when Lara was alone in the ruins, while the benchmark scores 45+ fps average at my settings. I'm not saying that I couldn't hit much higher performance elsewhere, and I'll let you know when I reach the section demoed at WWDC
So, I have finally reached that section. The overall frame rate is slightly lower (-5%) than the built-in benchmark's average fps at high resolution. At lower resolution, where the game is not GPU-bound (unlikely to be the case for an iGPU), the difference is larger (about -15%).
I tried various settings, including "low" without AA, which I suspect are the ones used at WWDC, although I find the colours more realistic on the WWDC video than on my computer at those settings (my Mac is on the right).
[Screenshot: side-by-side comparison of the WWDC footage and the game running on my Mac]
Maybe they used higher settings, but the texture filtering appears quite low.
EDIT: their settings are definitely higher than the "low" preset. At low settings, I see spider webs and other details pop in when Lara comes close, while they're visible from a distance on the WWDC video. So the LoD was at least "Normal".
I'm also almost certain that BTAO was on, and that Pure Hair was set to "Normal". So we're looking at mostly "medium" settings, with texture quality and texture filtering set to low.

My point is, the WWDC demo did not use a game section that runs particularly well on Metal, and if it was indeed running on the A12Z, then it may be the best iGPU for this game, which wasn't even optimised for that hardware. Note that the WWDC video is 30 fps and there are no dropped frames except at the very moment when she hits the water, so we'd be talking about more than 30 fps average if the frame rate were not capped.
 
I have no idea why people keep chasing after Apple and a Mac for their gaming needs. I have owned Macs since 1989 and enjoyed some games along the way, but never deluded myself into thinking a Mac will ever be a substitute for a console or a gaming PC or thought Apple would suddenly make that market a major point of focus and investment. It’s just not them.
Well, I have an iMac with the Vega 48 and there isn't one high-end game that hasn't been buttery smooth at 2560x1440. So personally I don't see why people keep saying Macs 'can't' be decent gaming machines.
 
Sad, but that type of usage is really niche.
It would cost more for Apple to maintain an Intel model than it would benefit them.
With some of these moves that Apple makes, I often hear people say "well, that's only 5% of the buyers they're losing, so no big deal", then with another change, "well, that's only 10% of the buyers who want that, so no big deal". And so on and so on...
But eventually, those small slices of Mac buyers that Apple keeps peeling off with its moves add up.
 
Supposedly AMD's RDNA 2 cards, which will launch alongside the PS5 and Xbox Series X, will be around 150% faster than Nvidia's current line-up. Kinda sceptical about those claims myself, but we'll see!
 
I doubt the A12Z offers much better GPU performance than the top Ice Lake Gen11 GPU, and the GPU performance from Intel will increase significantly in the upcoming years.
If you consider power in that "GPU performance" equation then Apple Silicon is going to eat Intel for breakfast, lunch, and dinner.
 
With some of these moves that Apple makes, I often hear people say "well, that's only 5% of the buyers they're losing, so no big deal", then with another change, "well, that's only 10% of the buyers who want that, so no big deal". And so on and so on...
But eventually, those small slices of Mac buyers that Apple keeps peeling off with its moves add up.
This makes the error of assuming there is no overlap between those groups.
 
Either way, as you keep narrowing the platform you keep losing potential customers. Seems obvious to me.
And they'll probably gain more customers by making Macs with better performance, battery life and features (like the ability to run any iOS app).
Maintaining two architectures indefinitely really doesn't make sense, and it would confuse users, as some apps would not be compatible with all Macs.
 