I don't know about precision, but why would this cross-platform benchmark tool favour Metal in particular? At worst, Metal may lack features that are available in DX or Vulkan, which can only decrease performance.

The GFXbench database is quite informative. We see the A12X trashing any other iGPU from AMD or Intel.
It's still a simplistic benchmark tool, but it's about the only one that runs on mobile and PC. Geekbench can use the GPU, but only for compute, AFAIK.
 
Half precision shaders are faster. If the desktop scores are for full precision then the results aren't comparable. Sadly, AnandTech didn't appear to follow up (or ask the developer), so we don't know.

edit: Metal supports both precisions but it appears to default to fast math for shaders, which seems to me to be half precision. A dev can correct me if I am wrong (probably am, lol).
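
For anyone who wants to poke at this themselves, here is a minimal sketch (illustrative only; the kernel names and the 0.5 scale factor are made up) of compiling the same MSL source with fast math toggled, next to the half vs float buffer types a benchmark shader could be using:

[CODE]
import Metal

// Illustrative only: two kernels doing the same work, one in full precision
// (float) and one in half precision (half), compiled with fast math toggled.
let source = """
#include <metal_stdlib>
using namespace metal;

kernel void scale_float(device float *data [[buffer(0)]],
                        uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 0.5f;
}

kernel void scale_half(device half *data [[buffer(0)]],
                       uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 0.5h;
}
"""

guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

let options = MTLCompileOptions()
options.fastMathEnabled = false   // defaults to true; flip it to compare against the relaxed-math path

let library = try! device.makeLibrary(source: source, options: options)
let fullPrecision = library.makeFunction(name: "scale_float")!
let halfPrecision = library.makeFunction(name: "scale_half")!
print(fullPrecision.name, halfPrecision.name)
[/CODE]

As far as I understand, fast math and half precision are separate knobs: fastMathEnabled relaxes IEEE rules for whatever types the shader declares, while half vs float is chosen in the shader source itself.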
 
These consoles have x86-64 (CISC) CPUs.

isn't the heavy lifting done by the GPU?
Food for thought... Nintendo has been using RISC processors. One wonders why studios didn't port a lot of their games from their PS/PC/Xbox catalogs.

Actually, they have: Doom, Skyrim, Dark Souls, The Witcher 3, Assassin's Creed...
 
isn't the heavy lifting done by the GPU?
Take with a grain of salt.

It is a combined effort: you need a CPU fast enough not to get in the way of the GPU, you need the GPU to calculate and render the images, and you have to move data (large blocks of it) between the CPU and GPU efficiently, which with external graphics solutions can itself be a bottleneck.

GDDR6 is better for high-bandwidth transfers (graphics memory), while DDR is better when it comes to smaller reads/writes where latency matters.

From what I read, the next-gen gaming consoles from both Microsoft and Sony have a similar design: neither uses an external discrete GPU. Both put an 8-core Ryzen CPU and an AMD GPU on the same chip or package, communicating through shared memory (16GB of GDDR6).

We have yet to learn how Apple will implement this, but they could use the same technique: instead of the Ryzen cores, maybe a 12-core CPU (8 performance cores and 4 efficiency cores; an efficiency core peaks at about 70% of a performance core, I believe) and a GPU, with memory shared between the two. In Apple's case they would likely have to design a solution for the CPU to have access to both GDDR6 for shared graphics data and DDR4 for application data, and figure out what goes where :eek:... (I am not a chip designer, so I am likely wrong about how they will do it).

Since this is a 5nm chip, they get either more performance or room to cram more in (assuming TSMC can produce the same size chips as at 7nm). If Apple runs out of space on one chip, they could go with a chiplet design like AMD and cram more silicon in: GPU functions on one chip, CPU functions on another, GDDR6 memory accessible by both, and, I am guessing, some sort of mediation between the two.

Apple will of course err on the side of making it more performant for video creatives, and hopefully it works out well for gamers too. Apple has an exceptional, and rather large, chip design team, so we can only dream about what they will do and wait to see what they come out with. I have no doubt their laptops will exceed the current generation by a substantial margin. They will need that, or the question becomes: why disrupt your customers if it is not beneficial to them (not in the future, but in the 1st generation)? That is why they more than likely already know they will exceed expectations considerably.
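
To give a feel for what shared memory between CPU and GPU buys you, here is a minimal sketch using today's Metal API (the buffer size and the blit are arbitrary; this is not a claim about how Apple will wire up GDDR6/DDR4): a .storageModeShared buffer that the CPU writes and the GPU reads with no upload step in between.

[CODE]
import Metal

// Sketch: on a unified-memory SoC a .storageModeShared buffer is visible to
// both the CPU and the GPU without an explicit upload step.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else { fatalError("No Metal device") }

let count = 1024
let shared = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes straight into the allocation...
let ptr = shared.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }

// ...and the GPU can consume the same pages immediately (a blit stands in
// here for whatever compute or render pass would actually use the data).
let destination = device.makeBuffer(length: shared.length, options: .storageModeShared)!
let commands = queue.makeCommandBuffer()!
let blit = commands.makeBlitCommandEncoder()!
blit.copy(from: shared, sourceOffset: 0, to: destination, destinationOffset: 0, size: shared.length)
blit.endEncoding()
commands.commit()
commands.waitUntilCompleted()

// Read back on the CPU, again with no staging copy.
let result = destination.contents().bindMemory(to: Float.self, capacity: count)
print(result[count - 1])   // 1023.0
[/CODE]

On a discrete GPU the same data would typically bounce through a private buffer over PCIe, which is exactly the traffic the console-style designs (and presumably Apple's) are trying to avoid.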
 
Half precision shaders are faster. If the desktop scores are for full precision then the results aren't comparable. Sadly, AnandTech didn't appear to follow up (or ask the developer), so we don't know.
OK, I had not read the AnandTech page in detail. GFXbench may indeed favour mobile GPUs over PC GPUs.
 
People use the iPad Pro for playing Fortnite at 120Hz. The integrated GPU in the A12 is really quite powerful. Similar to the Xbox One or PS4, from what I understand.

Expect the A14 to be much more powerful.

many machines are capable of pushing out 120fps if you turn the settings low enough.

fortnite is also not a game to benchmark against. it's...not used in....anyone's benchmarks for that matter.

you wanted to make a point and used a disastrously awful example.
Also I could see an Apple TV-focused box that has the performance of a console

good luck with that. the PS5 (especially) and XBSX are pushing major improvements to architecture that PCs will be missing for a little while.

if you said "has the performance of something better than a nintendo switch but still inferior to other consoles" then you would be correct.
 

Those options missing from PCs is more a function of the consoles being the first devices to use certain components, which will be in limited supply until manufacturing ramps up. The 'architecture' is all about what you decide to fit on the silicon, and there is no reason why Apple could not make similar design decisions; the fact that the GPU and CPU share graphics memory on those machines is exactly what Apple will likely do with their chips. It all depends on decisions Apple has made (but that we are not privy to).

I am also not saying it has to be sold as a PS5 or Xbox alternative; approach it from the sales side as an Apple TV that is more powerful and less restrictive for higher-end games. It would be a way of limiting expectations, which you could then exceed, while having a more powerful box that can handle bigger games (80/20 rule). You don't actually need to be the best, just good enough for AAA games to run well on... but your selling point is: this is an entertainment center for the adults, with more power, that can be used for gaming by 'the kids' (even if the kids are adults).

Basically, at its core it would have the same CPU/GPU configuration (at 5nm vs. the PS5 at 7nm, which allows more to be packed into the thermal envelope), with the size of that SoC dependent on the enclosure thermals.

Brainstorming here...

You could even reduce overall cost and R&D for the machines by giving the Mac Mini and the Apple TV the same base components: the RISC CPU/GPU with shared memory (similar to what AMD did), motherboard, enclosure and power supply would be the same for both lines. The SSD options would be the same across the lines... The one aimed at being a Mac Mini would have additional options for more DDR4 memory.
You would have the Mac Mini Nano = Apple TV 4K (with more storage and memory options), Mac Mini = Apple TV ?, and then a Mac Mini Pro (fully packed with features, a larger enclosure for heat, 10Gb Ethernet options, options to ramp up memory further and a more powerful GPU built in), which at its base is the same as an Apple TV Pro. tvOS or macOS should be able to run on either system, depending on the minimum configuration required. The TV app on macOS could be expanded to effectively be tvOS running underneath it, but with games showing up installed as macOS apps (Universal installation).
 
good luck with that. the PS5 (especially) and XBSX are pushing major improvements to architecture that PCs will be missing for a little while.
I don't see why Apple could not design a console matching these, if they wanted to (which I don't think they do).
Apple already makes excellent CPUs, they have a neural engine to accelerate functions, the architecture of Apple GPUs is clearly more power-efficient than AMD's, they're probably working on ray-tracing hardware (hence their partnership with Imagination Technologies), they design their own SSD controllers and all.
 
Does Apple have a DirectStorage equivalent?
I don't think so, but this is something they could develop if they decided to make their own game console, given the experience they have in designing custom hardware. AFAIK, DirectStorage is hardware decompression of game assets; it's not rocket science.
 
Does Apple have a DirectStorage equivalent?
My understanding of DirectStorage is that it is a DirectX API to load assets from the SSD directly into the GDDR6 memory (or something to that effect), which is shared between the CPU and GPU. The normal path requires the CPU/SSD controller to load the assets into DDR memory (which is optimized for random access and lower latency, rather than the large contiguous accesses GDDR is built for), and then transfer them to the GDDR memory on the video card over the PCIe bus, which is comparatively slow and not energy efficient.

I believe Apple's GPU is a tile-based renderer, which is designed to reduce external bandwidth. That matters because, as much as the GPU benefits from pure computational power, it is bottlenecked by the external bandwidth between components. The more you can get the CPU and GPU onto the same package (while staying energy efficient enough to allow more power before overheating), the more you get rid of these bottlenecks, and the higher the performance you will see irrespective of the raw computational performance of the GPU.

I am sure Apple is looking at everything that will increase efficiency and improve performance while reducing heat, etc. We will have to wait and see what designs Apple releases and what annual design changes they implement... but now that Apple controls all the major components, they will be much more able to improve the overall design of the computers, something that would have remained problematic with Intel-based designs. This benefits not only gaming but also video creatives, who might want to load raw or compressed footage from external storage (SSD) and apply transformations to it as it is streamed through.

A component design with Intel chips (or interchangeable CPUs/GPUs) would likely not receive an architectural advantage like this for the foreseeable future.
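
There is no public DirectStorage-style API from Apple that I know of, but unified memory already gets you part of the way there today: you can memory-map an asset file and hand the pages to Metal without an intermediate copy. A rough sketch (the file path is hypothetical, error handling is minimal, and any decompression would still be up to you):

[CODE]
import Foundation
import Metal

// Hypothetical asset path; mmap gives page-aligned memory, which
// makeBuffer(bytesNoCopy:) requires.
let path = "/tmp/asset.bin"
let fd = open(path, O_RDONLY)
precondition(fd >= 0, "could not open asset file")

let fileSize = Int(lseek(fd, 0, SEEK_END))
let pageSize = Int(getpagesize())
let mappedLength = max(pageSize, (fileSize + pageSize - 1) / pageSize * pageSize)

let mapped = mmap(nil, mappedLength, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0)
precondition(mapped != MAP_FAILED, "mmap failed")

guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

// Wrap the mapped pages in a GPU-visible buffer with no staging copy; on a
// unified-memory design the GPU reads the same physical pages the file was
// paged into.
let assetBuffer = device.makeBuffer(bytesNoCopy: mapped!,
                                    length: mappedLength,
                                    options: .storageModeShared,
                                    deallocator: { pointer, length in
                                        _ = munmap(pointer, length)
                                        _ = close(fd)
                                    })!
print("GPU-visible asset buffer: \(assetBuffer.length) bytes")
[/CODE]

The dedicated decompression hardware the new consoles pair with this path is the part that would still be missing.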


Memory Bandwidth Requirements of Tile-Based Rendering

Abstract

Because mobile phones are omnipresent and equipped with displays, they are attractive platforms for rendering 3D images. However, because they are powered by batteries, a graphics accelerator for mobile phones should dissipate as little energy as possible. Since external memory accesses consume a significant amount of power, techniques that reduce the amount of external data traffic also reduce the power consumption. A technique that looks promising is tile-based rendering. This technique decomposes a scene into tiles and renders the tiles one by one. This allows the color components and z values of one tile to be stored in small, on-chip buffers, so that only the pixels visible in the final scene need to be stored in the external frame buffer. However, in a tile-based renderer each triangle may need to be sent to the graphics accelerator more than once, since it might overlap more than one tile. In this paper we measure the total amount of external data traffic produced by conventional and tile-based renderers using several representative OpenGL benchmark scenes. The results show that employing a tile size of 32 × 32 pixels generally yields the best trade-off between the amount of on-chip memory and the amount of external data traffic. In addition, the results show that overall, a tile-based architecture reduces the total amount of external data traffic by a factor of 1.96 compared to a traditional architecture.
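
To put rough numbers on that 32 × 32 figure, here is a quick back-of-envelope sketch (the 4 bytes of color plus 4 bytes of depth per pixel is my assumption, not a number from the paper):

[CODE]
import Foundation

// Back-of-envelope arithmetic for a 32x32 tile; the 4+4 bytes per pixel
// (color + z) is an assumption, not a figure from the paper.
let tileSide = 32
let bytesPerPixel = 4 + 4
let tileBufferBytes = tileSide * tileSide * bytesPerPixel
print("On-chip buffer per tile: \(tileBufferBytes / 1024) KB")        // 8 KB

// How many such tiles a 4K frame decomposes into.
let (width, height) = (3840, 2160)
let tiles = ((width + tileSide - 1) / tileSide) * ((height + tileSide - 1) / tileSide)
print("32x32 tiles per 4K frame: \(tiles)")                            // 8160

// Only the final visible pixels have to hit the external frame buffer
// (intermediate color/z traffic stays on chip), which is the main source
// of the paper's ~1.96x reduction in off-chip traffic.
let frameBufferMB = Double(width * height * 4) / 1_048_576
print(String(format: "External frame buffer: %.1f MB per frame", frameBufferMB))
[/CODE]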
 
I don't think so, but this is something they could develop if they decided to make their own game console, given the experience they have in designing custom hardware. AFAIK, DirectStorage is hardware decompression of game assets; it's not rocket science.
Yeah, aside from the Xbox Velocity Architecture, Microsoft has been quiet on its storage and I/O features. Sony put out a ton of information, but their solution is pretty esoteric. If 'no loading' is really achieved in the next gen, that could allow for some interesting games.
 
From what I read, the next-gen gaming consoles from both Microsoft and Sony have a similar design: neither uses an external discrete GPU. Both put an 8-core Ryzen CPU and an AMD GPU on the same chip or package, communicating through shared memory (16GB of GDDR6).

Wikipedia says that the GPU of the PS5 is AMD RDNA 2, which is based on RDNA 1, which is:
"It is likely to be RISC SIMD (or rather SIMT) microarchitecture.[citation needed] It is manufactured and fabricated with TSMC's 7 nm FinFET graphics chips used in the Navi series of AMD Radeon graphics cards"
 
I did not bother naming the GPU since it would take effort to look up, it was not really important to the point I was trying to make, and it is not going to be used as the Apple GPU as far as anyone has indicated. The point was that the consoles are moving towards a 'single chip' (I take that with a grain of salt, since the press is sometimes not very accurate, so it could be the same package (chiplet) or the same chip) using shared memory (GDDR6, which at 16GB is, I am guessing, not on the SoC itself).

We will of course have to wait to see what Apple puts on their silicon... right now we are all just fantasizing about what could be. I do believe that Apple will not hold back much on the first revision used in the MacBook, as they have to make sure it outperforms what it replaces, as a way of making sure they control the press in a positive way (which is a reason why they picked a Pro model for the first wave).
 
I don't see why Apple could not design a console matching these

they could, but that's not the point. macs currently have the same architecture issue that any other consumer computer has vs the upcoming console.

linus takes the time to explain this a bit in his apology video to epic games ceo, and there's a game dev who goes into deep detail analyzing sony's tech dive presentation on the new developments.

also an apple gaming console would be unnecessarily expensive and stand no chance against its market competitors who are deeply established.
 
Saw Linus' video a while back -- and I am very impressed with his integrity in taking responsibility when he makes mistakes.

Any computer where the manufacturer does not have the control or influence to change the designs of the components it uses will not easily be able to implement architectural changes that would be generally positive. When you are mixing components designed by different manufacturers (basically off-the-shelf designs with at most limited customization), there will always be some cruft, overlap, etc. from each manufacturer's own priorities.

That is one of the beneficial aspects of Apple bringing all of its impactful system components under its own design team: Apple can make changes to them without worrying about who else is using them, for what purpose, and what impact it will have on those customers. Their focus is not on the components' priorities but on the priorities of the user's devices. For the first few years of annual releases, I expect the hardware designers at Apple will have a field day removing bottlenecks that affect performance and efficiency. It is one of the reasons why I am stoked about the transition.
 
you wanted to make a point and used a disastrously awful example.

Blimey, hope you feel good about yourself.

TVs don’t refresh at 120hz. Maybe there are some super high end ones that do, I don’t know. But there isn’t really any point having a console that can render a game at 120hz because people don’t have the kit to see it.

iPad Pro can.

If you’ve finished pulling your head out of your back side, maybe you’ll be able to think about not being a total knob before posting next time.
 

do a bit more research before you pretend like you have an idea what you're talking about, while simultaneously trying to make out other people to be the idiot.

"maybe there are some super high end ones that do, i don't know"

you're absolutely right about one thing: you are clueless. you don't need a super high end TV available on the market right now to get 120hz and VRR compatibility along with gsync support.
 


most tvs that advertise 120hz+ aren't really 120hz. while you can find some mid-low range claiming they do, you can't really trust them
 

that's nice. i was not describing most.

and when purchasing tech i don't do so on the basis of trust, i do so by spec sheet. trust really means absolutely nothing when the facts about a product are readily available. something either does or does not offer a feature, there is no in between that fancy marketing can distract me with.
 


well enlighten us with some affordable tvs with true 120hz.
 
What do you consider affordable?
It means different things to different people... so just give the best deal (price and model, to verify that it is true 120Hz)... so 'we' can then go on to arguing about whether it is true 120Hz, or about the definition of affordable... 🤣
 
LG C9 & CX support 4k120. I'll let others argue affordability in an Apple forum, lol.
 


yeah. i've thought about getting that but then there is the burn-in. i think it would be great for most things, but i do play games with static HUDs. doesn't seem ideal for a gamer.
 