Tile-based deferred rendering and immediate-mode rendering aren't incompatible. Where have you heard this?

This has no impact if you use another GPU. Intel's latest Xe iGPU is tile-based and fully compatible.
If Apple doesn't provide a GPU that can compete at parity with a 3090, and won't allow external GPUs either, they will kill the entire Mac lineup.
IIRC both AMD and Nvidia support tile-based rasterization, but that does not make them deferred renderers. They are still immediate-mode renderers.
 
Indeed. It's only because Apple decided not to support a dedicated GPU. Nothing about being tile-based or immediate-mode affects compatibility.
For compute, the style of renderer doesn't matter, so yeah, they could address both as if they were one "big" GPU. For actual rendering, I don't think the API could drive both GPUs at once unless it forces the Apple GPU to render in IMR mode.

To be clear, I am saying I don't think Apple supports mGPU with differing types of rendering engines for rasterization in Metal. (To be fair, I don't think D3D does either.)
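To sketch the compute case from the post above: for pure compute work the renderer style is irrelevant, so each Metal device can simply take a slice of the same dispatch. A minimal Swift sketch (macOS only; the workload-splitting part is elided and the print format is illustrative, not from the post):

```swift
import Metal

// Enumerate every Metal device the system reports (built-in GPU, dGPU, eGPU).
// For compute dispatches the TBDR-vs-IMR distinction doesn't apply, so the
// same kernel could be encoded on each device's queue.
let devices = MTLCopyAllDevices()
for device in devices {
    guard let queue = device.makeCommandQueue() else { continue }
    print("\(device.name): lowPower=\(device.isLowPower) removable=\(device.isRemovable)")
    // ... build a MTLComputePipelineState per device and encode a
    // portion of the workload on `queue` ...
}
```

Rasterization is the hard case: there is no analogous "just split it" path when the devices disagree on rendering model, which is the limitation being discussed.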
 
Well, Apple already has eGPUs and mGPUs with full support in their Metal API.
Essentially it's an artificial limit Apple has implemented, either purposefully or by neglecting it.
 
He's trying to find an email address so he can demand that someone else do a bunch of research for him?
Yes, basically because I don't have the tools (or the knowledge) the AnandTech team has to test whether the A15 SoC has VP9 or AV1 hardware encoding and decoding. I'm not a professional reviewer, unlike them. Same with the SSD: I don't have the tools to know whether an SSD uses TLC or QLC NAND.

So YES, I was politely asking (not demanding) if someone had his email so I could, again, politely ask (not demand) the AnandTech team to take a look at that instead of me. That's part of their work: researching the latest tech.

If, at this point, you don't see the difference between demanding and politely asking something in a tech forum (like this one, or the AnandTech one), I think you're just purposely bending my words to attack me. I didn't expect this from a well-known user like you. It's one thing to have great knowledge about silicon, as seems to be your case, and another to treat others respectfully. I'm disappointed.
 
No, they aren’t a hybrid. There is no such thing as a “RISC” back end. These are not chips with some sort of instruction translator that translates to pure risc instructions and then sends them to a risc CPU. The microcode instructions do not solve CISC’s problems, and the microcode instruction stream is not at all RISC. CISC complexity is found throughout the entire pipeline, in every unit. Nobody who actually designs CPUs would describe x86 chips as anything other than CISC.

And ARM is truly RISC. The RISC philosophy has nothing to do with the number of instructions, but with the complexity of instructions, where complexity is defined in a very specific way (instructions only read or write registers, except for limited LD/ST instructions; no variable-length instructions, though multiple fixed instruction lengths are permissible; no complicated memory addressing; etc.).
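The load/store distinction above can be seen directly in hand-written assembly (an illustration, not from the post): x86-64 lets an ALU instruction take a memory operand, while AArch64 forces a separate load so the add works purely on registers.

```asm
; x86-64 (NASM syntax): one variable-length instruction both reads
; memory and adds -- an ALU op with a memory operand (CISC).
add eax, dword [rbx]

; AArch64: only LDR/STR touch memory; the ALU op uses registers only,
; and every instruction is a fixed 4 bytes (RISC).
ldr w1, [x1]
add w0, w0, w1
```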

I designed chips at AMD, Exponential (PowerPC), Sun (SPARC), etc. Nobody I ever worked with would consider calling ARM CISC or x86-64 RISC (or “hybrid”).
I've heard Skylake described as being VLIW inside, back when the tight-loop SMT bug was brought up by server farms.
Is that true? Or is it a special case, or a special mode for some instructions?
 